00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2089 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3354 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.022 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.024 The recommended git tool is: git 00:00:00.024 using credential 00000000-0000-0000-0000-000000000002 00:00:00.026 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.038 Fetching changes from the remote Git repository 00:00:00.040 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.056 Using shallow fetch with depth 1 00:00:00.056 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.056 > git --version # timeout=10 00:00:00.074 > git --version # 'git version 2.39.2' 00:00:00.074 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.100 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.100 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.271 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.281 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.291 Checking out Revision 0a225b996e8e47b7b6e5d33ac6084f3fe4396df2 (FETCH_HEAD) 00:00:02.291 > git config core.sparsecheckout # timeout=10 00:00:02.302 > git read-tree -mu HEAD # timeout=10 00:00:02.317 > git checkout -f 0a225b996e8e47b7b6e5d33ac6084f3fe4396df2 # timeout=5 00:00:02.337 Commit message: "perf/nvme: run eraser.sh dd command in parallel" 00:00:02.337 > git rev-list --no-walk 0a225b996e8e47b7b6e5d33ac6084f3fe4396df2 # timeout=10 00:00:02.560 [Pipeline] Start of Pipeline 00:00:02.574 [Pipeline] library 00:00:02.576 Loading library shm_lib@master 00:00:02.576 Library shm_lib@master is cached. Copying from home. 00:00:02.596 [Pipeline] node 00:00:02.605 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.607 [Pipeline] { 00:00:02.619 [Pipeline] catchError 00:00:02.621 [Pipeline] { 00:00:02.634 [Pipeline] wrap 00:00:02.644 [Pipeline] { 00:00:02.654 [Pipeline] stage 00:00:02.656 [Pipeline] { (Prologue) 00:00:02.678 [Pipeline] echo 00:00:02.680 Node: VM-host-WFP7 00:00:02.688 [Pipeline] cleanWs 00:00:02.702 [WS-CLEANUP] Deleting project workspace... 00:00:02.702 [WS-CLEANUP] Deferred wipeout is used... 
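For reference, the shallow jbp checkout traced above reduces to roughly the following git sequence; the repository URL and revision are copied from the log, while Jenkins' credential helper, proxy settings and per-command timeouts are left out, and the initial 'git init' is assumed rather than shown:

    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    # shallow fetch of master only, as in the job (--depth=1)
    git fetch --tags --force --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # detach onto the fetched revision ("perf/nvme: run eraser.sh dd command in parallel")
    git checkout -f 0a225b996e8e47b7b6e5d33ac6084f3fe4396df2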
00:00:02.711 [WS-CLEANUP] done 00:00:02.906 [Pipeline] setCustomBuildProperty 00:00:02.997 [Pipeline] httpRequest 00:00:03.016 [Pipeline] echo 00:00:03.018 Sorcerer 10.211.164.101 is alive 00:00:03.027 [Pipeline] retry 00:00:03.029 [Pipeline] { 00:00:03.042 [Pipeline] httpRequest 00:00:03.047 HttpMethod: GET 00:00:03.047 URL: http://10.211.164.101/packages/jbp_0a225b996e8e47b7b6e5d33ac6084f3fe4396df2.tar.gz 00:00:03.048 Sending request to url: http://10.211.164.101/packages/jbp_0a225b996e8e47b7b6e5d33ac6084f3fe4396df2.tar.gz 00:00:03.049 Response Code: HTTP/1.1 200 OK 00:00:03.049 Success: Status code 200 is in the accepted range: 200,404 00:00:03.050 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_0a225b996e8e47b7b6e5d33ac6084f3fe4396df2.tar.gz 00:00:03.259 [Pipeline] } 00:00:03.275 [Pipeline] // retry 00:00:03.283 [Pipeline] sh 00:00:03.563 + tar --no-same-owner -xf jbp_0a225b996e8e47b7b6e5d33ac6084f3fe4396df2.tar.gz 00:00:03.580 [Pipeline] httpRequest 00:00:03.595 [Pipeline] echo 00:00:03.597 Sorcerer 10.211.164.101 is alive 00:00:03.605 [Pipeline] retry 00:00:03.607 [Pipeline] { 00:00:03.621 [Pipeline] httpRequest 00:00:03.626 HttpMethod: GET 00:00:03.627 URL: http://10.211.164.101/packages/spdk_d476702647b50ee733c611b156872c5174f339ff.tar.gz 00:00:03.627 Sending request to url: http://10.211.164.101/packages/spdk_d476702647b50ee733c611b156872c5174f339ff.tar.gz 00:00:03.628 Response Code: HTTP/1.1 200 OK 00:00:03.629 Success: Status code 200 is in the accepted range: 200,404 00:00:03.629 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_d476702647b50ee733c611b156872c5174f339ff.tar.gz 00:00:17.512 [Pipeline] } 00:00:17.529 [Pipeline] // retry 00:00:17.537 [Pipeline] sh 00:00:17.822 + tar --no-same-owner -xf spdk_d476702647b50ee733c611b156872c5174f339ff.tar.gz 00:00:20.377 [Pipeline] sh 00:00:20.663 + git -C spdk log --oneline -n5 00:00:20.663 d47670264 test/unit: only run fsdev unit test when SPDK_CONFIG_FSDEV is defined 00:00:20.663 342eca0d6 doc: add Vhost and CVL RoCEv2 performance reports 00:00:20.663 7c739692e env_dpdk: restore opts_size after opts structure is zeroed 00:00:20.663 ff89983c5 script/rpc.py: Provide necessary params for bdev_compress_create 00:00:20.663 5dc1c71d6 util: add SPDK_FIELD_VALID() macro 00:00:20.686 [Pipeline] withCredentials 00:00:20.697 > git --version # timeout=10 00:00:20.710 > git --version # 'git version 2.39.2' 00:00:20.729 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:20.731 [Pipeline] { 00:00:20.741 [Pipeline] retry 00:00:20.743 [Pipeline] { 00:00:20.759 [Pipeline] sh 00:00:21.044 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:21.319 [Pipeline] } 00:00:21.336 [Pipeline] // retry 00:00:21.341 [Pipeline] } 00:00:21.357 [Pipeline] // withCredentials 00:00:21.367 [Pipeline] httpRequest 00:00:21.391 [Pipeline] echo 00:00:21.393 Sorcerer 10.211.164.101 is alive 00:00:21.403 [Pipeline] retry 00:00:21.405 [Pipeline] { 00:00:21.420 [Pipeline] httpRequest 00:00:21.425 HttpMethod: GET 00:00:21.425 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:21.426 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:21.448 Response Code: HTTP/1.1 200 OK 00:00:21.448 Success: Status code 200 is in the accepted range: 200,404 00:00:21.449 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:05.913 [Pipeline] } 
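Outside Jenkins, the package-cache fetch and unpack shown above can be approximated as follows; the cache host, tarball name and tar flags are taken from the log, while the httpRequest step's retry loop and response-code check are not reproduced:

    curl -fO http://10.211.164.101/packages/spdk_d476702647b50ee733c611b156872c5174f339ff.tar.gz
    tar --no-same-owner -xf spdk_d476702647b50ee733c611b156872c5174f339ff.tar.gz
    git -C spdk log --oneline -n5   # sanity-check the unpacked revision, as the job does next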
00:01:05.930 [Pipeline] // retry 00:01:05.938 [Pipeline] sh 00:01:06.221 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:07.614 [Pipeline] sh 00:01:07.898 + git -C dpdk log --oneline -n5 00:01:07.898 caf0f5d395 version: 22.11.4 00:01:07.898 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:07.898 dc9c799c7d vhost: fix missing spinlock unlock 00:01:07.898 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:07.898 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:07.917 [Pipeline] writeFile 00:01:07.932 [Pipeline] sh 00:01:08.217 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:08.229 [Pipeline] sh 00:01:08.514 + cat autorun-spdk.conf 00:01:08.514 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.514 SPDK_RUN_ASAN=1 00:01:08.514 SPDK_RUN_UBSAN=1 00:01:08.514 SPDK_TEST_RAID=1 00:01:08.514 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:08.514 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:08.514 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:08.522 RUN_NIGHTLY=1 00:01:08.524 [Pipeline] } 00:01:08.538 [Pipeline] // stage 00:01:08.555 [Pipeline] stage 00:01:08.557 [Pipeline] { (Run VM) 00:01:08.570 [Pipeline] sh 00:01:08.878 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:08.878 + echo 'Start stage prepare_nvme.sh' 00:01:08.878 Start stage prepare_nvme.sh 00:01:08.878 + [[ -n 0 ]] 00:01:08.878 + disk_prefix=ex0 00:01:08.878 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:08.878 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:08.879 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:08.879 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.879 ++ SPDK_RUN_ASAN=1 00:01:08.879 ++ SPDK_RUN_UBSAN=1 00:01:08.879 ++ SPDK_TEST_RAID=1 00:01:08.879 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:08.879 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:08.879 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:08.879 ++ RUN_NIGHTLY=1 00:01:08.879 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:08.879 + nvme_files=() 00:01:08.879 + declare -A nvme_files 00:01:08.879 + backend_dir=/var/lib/libvirt/images/backends 00:01:08.879 + nvme_files['nvme.img']=5G 00:01:08.879 + nvme_files['nvme-cmb.img']=5G 00:01:08.879 + nvme_files['nvme-multi0.img']=4G 00:01:08.879 + nvme_files['nvme-multi1.img']=4G 00:01:08.879 + nvme_files['nvme-multi2.img']=4G 00:01:08.879 + nvme_files['nvme-openstack.img']=8G 00:01:08.879 + nvme_files['nvme-zns.img']=5G 00:01:08.879 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:08.879 + (( SPDK_TEST_FTL == 1 )) 00:01:08.879 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:08.879 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:08.879 + for nvme in "${!nvme_files[@]}" 00:01:08.879 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:08.879 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:08.879 + for nvme in "${!nvme_files[@]}" 00:01:08.879 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:08.879 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:08.879 + for nvme in "${!nvme_files[@]}" 00:01:08.879 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:08.879 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:08.879 + for nvme in "${!nvme_files[@]}" 00:01:08.879 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:08.879 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:08.879 + for nvme in "${!nvme_files[@]}" 00:01:08.879 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:08.879 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:08.879 + for nvme in "${!nvme_files[@]}" 00:01:08.879 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:08.879 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:08.879 + for nvme in "${!nvme_files[@]}" 00:01:08.879 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:09.138 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:09.138 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:09.138 + echo 'End stage prepare_nvme.sh' 00:01:09.139 End stage prepare_nvme.sh 00:01:09.150 [Pipeline] sh 00:01:09.434 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:09.434 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:01:09.434 00:01:09.434 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:09.434 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:09.434 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:09.434 HELP=0 00:01:09.434 DRY_RUN=0 00:01:09.434 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:09.434 NVME_DISKS_TYPE=nvme,nvme, 00:01:09.434 NVME_AUTO_CREATE=0 00:01:09.434 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:09.434 NVME_CMB=,, 00:01:09.434 NVME_PMR=,, 00:01:09.434 NVME_ZNS=,, 00:01:09.434 NVME_MS=,, 00:01:09.434 NVME_FDP=,, 00:01:09.434 SPDK_VAGRANT_DISTRO=fedora39 00:01:09.434 
SPDK_VAGRANT_VMCPU=10 00:01:09.434 SPDK_VAGRANT_VMRAM=12288 00:01:09.434 SPDK_VAGRANT_PROVIDER=libvirt 00:01:09.434 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:09.434 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:09.434 SPDK_OPENSTACK_NETWORK=0 00:01:09.434 VAGRANT_PACKAGE_BOX=0 00:01:09.434 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:09.434 FORCE_DISTRO=true 00:01:09.434 VAGRANT_BOX_VERSION= 00:01:09.434 EXTRA_VAGRANTFILES= 00:01:09.434 NIC_MODEL=virtio 00:01:09.434 00:01:09.434 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:09.434 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:11.972 Bringing machine 'default' up with 'libvirt' provider... 00:01:11.972 ==> default: Creating image (snapshot of base box volume). 00:01:12.232 ==> default: Creating domain with the following settings... 00:01:12.232 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1723617159_75787740eda521fc9ab0 00:01:12.232 ==> default: -- Domain type: kvm 00:01:12.232 ==> default: -- Cpus: 10 00:01:12.232 ==> default: -- Feature: acpi 00:01:12.232 ==> default: -- Feature: apic 00:01:12.232 ==> default: -- Feature: pae 00:01:12.232 ==> default: -- Memory: 12288M 00:01:12.232 ==> default: -- Memory Backing: hugepages: 00:01:12.232 ==> default: -- Management MAC: 00:01:12.232 ==> default: -- Loader: 00:01:12.232 ==> default: -- Nvram: 00:01:12.232 ==> default: -- Base box: spdk/fedora39 00:01:12.232 ==> default: -- Storage pool: default 00:01:12.232 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1723617159_75787740eda521fc9ab0.img (20G) 00:01:12.232 ==> default: -- Volume Cache: default 00:01:12.232 ==> default: -- Kernel: 00:01:12.232 ==> default: -- Initrd: 00:01:12.232 ==> default: -- Graphics Type: vnc 00:01:12.232 ==> default: -- Graphics Port: -1 00:01:12.232 ==> default: -- Graphics IP: 127.0.0.1 00:01:12.232 ==> default: -- Graphics Password: Not defined 00:01:12.232 ==> default: -- Video Type: cirrus 00:01:12.232 ==> default: -- Video VRAM: 9216 00:01:12.232 ==> default: -- Sound Type: 00:01:12.232 ==> default: -- Keymap: en-us 00:01:12.232 ==> default: -- TPM Path: 00:01:12.232 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:12.232 ==> default: -- Command line args: 00:01:12.232 ==> default: -> value=-device, 00:01:12.232 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:12.232 ==> default: -> value=-drive, 00:01:12.232 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:12.232 ==> default: -> value=-device, 00:01:12.232 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:12.232 ==> default: -> value=-device, 00:01:12.232 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:12.232 ==> default: -> value=-drive, 00:01:12.232 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:12.232 ==> default: -> value=-device, 00:01:12.232 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:12.232 ==> default: -> value=-drive, 00:01:12.232 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 
00:01:12.232 ==> default: -> value=-device, 00:01:12.232 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:12.232 ==> default: -> value=-drive, 00:01:12.232 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:12.232 ==> default: -> value=-device, 00:01:12.232 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:12.232 ==> default: Creating shared folders metadata... 00:01:12.232 ==> default: Starting domain. 00:01:14.139 ==> default: Waiting for domain to get an IP address... 00:01:32.235 ==> default: Waiting for SSH to become available... 00:01:32.235 ==> default: Configuring and enabling network interfaces... 00:01:37.514 default: SSH address: 192.168.121.49:22 00:01:37.515 default: SSH username: vagrant 00:01:37.515 default: SSH auth method: private key 00:01:39.420 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:47.540 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:52.837 ==> default: Mounting SSHFS shared folder... 00:01:55.374 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:55.374 ==> default: Checking Mount.. 00:01:56.754 ==> default: Folder Successfully Mounted! 00:01:56.754 ==> default: Running provisioner: file... 00:01:57.694 default: ~/.gitconfig => .gitconfig 00:01:58.264 00:01:58.264 SUCCESS! 00:01:58.264 00:01:58.264 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:58.264 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:58.264 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:58.264 00:01:58.274 [Pipeline] } 00:01:58.289 [Pipeline] // stage 00:01:58.299 [Pipeline] dir 00:01:58.299 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:58.301 [Pipeline] { 00:01:58.314 [Pipeline] catchError 00:01:58.316 [Pipeline] { 00:01:58.328 [Pipeline] sh 00:01:58.612 + vagrant ssh-config --host vagrant 00:01:58.612 + sed -ne /^Host/,$p 00:01:58.612 + tee ssh_conf 00:02:01.151 Host vagrant 00:02:01.151 HostName 192.168.121.49 00:02:01.151 User vagrant 00:02:01.151 Port 22 00:02:01.151 UserKnownHostsFile /dev/null 00:02:01.151 StrictHostKeyChecking no 00:02:01.151 PasswordAuthentication no 00:02:01.151 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:01.151 IdentitiesOnly yes 00:02:01.151 LogLevel FATAL 00:02:01.151 ForwardAgent yes 00:02:01.151 ForwardX11 yes 00:02:01.151 00:02:01.165 [Pipeline] withEnv 00:02:01.168 [Pipeline] { 00:02:01.182 [Pipeline] sh 00:02:01.466 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:01.466 source /etc/os-release 00:02:01.466 [[ -e /image.version ]] && img=$(< /image.version) 00:02:01.466 # Minimal, systemd-like check. 
00:02:01.466 if [[ -e /.dockerenv ]]; then 00:02:01.466 # Clear garbage from the node's name: 00:02:01.466 # agt-er_autotest_547-896 -> autotest_547-896 00:02:01.466 # $HOSTNAME is the actual container id 00:02:01.466 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:01.466 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:01.466 # We can assume this is a mount from a host where container is running, 00:02:01.466 # so fetch its hostname to easily identify the target swarm worker. 00:02:01.466 container="$(< /etc/hostname) ($agent)" 00:02:01.466 else 00:02:01.466 # Fallback 00:02:01.466 container=$agent 00:02:01.466 fi 00:02:01.466 fi 00:02:01.466 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:01.466 00:02:01.737 [Pipeline] } 00:02:01.753 [Pipeline] // withEnv 00:02:01.761 [Pipeline] setCustomBuildProperty 00:02:01.776 [Pipeline] stage 00:02:01.778 [Pipeline] { (Tests) 00:02:01.795 [Pipeline] sh 00:02:02.091 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:02.383 [Pipeline] sh 00:02:02.668 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:02.943 [Pipeline] timeout 00:02:02.943 Timeout set to expire in 1 hr 30 min 00:02:02.945 [Pipeline] { 00:02:02.958 [Pipeline] sh 00:02:03.240 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:03.811 HEAD is now at d47670264 test/unit: only run fsdev unit test when SPDK_CONFIG_FSDEV is defined 00:02:03.824 [Pipeline] sh 00:02:04.108 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:04.382 [Pipeline] sh 00:02:04.665 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:04.941 [Pipeline] sh 00:02:05.223 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:05.483 ++ readlink -f spdk_repo 00:02:05.483 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:05.483 + [[ -n /home/vagrant/spdk_repo ]] 00:02:05.483 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:05.483 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:05.483 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:05.483 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:05.483 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:05.483 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:05.483 + cd /home/vagrant/spdk_repo 00:02:05.483 + source /etc/os-release 00:02:05.483 ++ NAME='Fedora Linux' 00:02:05.483 ++ VERSION='39 (Cloud Edition)' 00:02:05.483 ++ ID=fedora 00:02:05.483 ++ VERSION_ID=39 00:02:05.483 ++ VERSION_CODENAME= 00:02:05.483 ++ PLATFORM_ID=platform:f39 00:02:05.483 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:05.483 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:05.483 ++ LOGO=fedora-logo-icon 00:02:05.483 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:05.483 ++ HOME_URL=https://fedoraproject.org/ 00:02:05.483 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:05.483 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:05.483 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:05.483 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:05.483 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:05.483 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:05.483 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:05.483 ++ SUPPORT_END=2024-11-12 00:02:05.483 ++ VARIANT='Cloud Edition' 00:02:05.483 ++ VARIANT_ID=cloud 00:02:05.483 + uname -a 00:02:05.483 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:05.483 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:06.053 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:06.053 Hugepages 00:02:06.053 node hugesize free / total 00:02:06.053 node0 1048576kB 0 / 0 00:02:06.053 node0 2048kB 0 / 0 00:02:06.053 00:02:06.053 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:06.053 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:06.053 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:06.053 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:06.053 + rm -f /tmp/spdk-ld-path 00:02:06.053 + source autorun-spdk.conf 00:02:06.053 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.053 ++ SPDK_RUN_ASAN=1 00:02:06.053 ++ SPDK_RUN_UBSAN=1 00:02:06.053 ++ SPDK_TEST_RAID=1 00:02:06.053 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:06.053 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:06.053 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:06.053 ++ RUN_NIGHTLY=1 00:02:06.053 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:06.053 + [[ -n '' ]] 00:02:06.053 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:06.053 + for M in /var/spdk/build-*-manifest.txt 00:02:06.053 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:06.053 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:06.053 + for M in /var/spdk/build-*-manifest.txt 00:02:06.053 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:06.053 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:06.053 + for M in /var/spdk/build-*-manifest.txt 00:02:06.053 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:06.053 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:06.313 ++ uname 00:02:06.313 + [[ Linux == \L\i\n\u\x ]] 00:02:06.313 + sudo dmesg -T 00:02:06.313 + sudo dmesg --clear 00:02:06.313 + dmesg_pid=6170 00:02:06.313 + [[ Fedora Linux == FreeBSD ]] 00:02:06.313 + sudo dmesg -Tw 00:02:06.313 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:06.313 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:06.313 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:06.313 + [[ -x /usr/src/fio-static/fio ]] 00:02:06.313 + export FIO_BIN=/usr/src/fio-static/fio 00:02:06.313 + FIO_BIN=/usr/src/fio-static/fio 00:02:06.313 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:06.313 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:06.313 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:06.313 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:06.313 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:06.313 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:06.313 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:06.313 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:06.313 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:06.313 Test configuration: 00:02:06.313 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.313 SPDK_RUN_ASAN=1 00:02:06.313 SPDK_RUN_UBSAN=1 00:02:06.313 SPDK_TEST_RAID=1 00:02:06.313 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:06.313 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:06.313 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:06.313 RUN_NIGHTLY=1 06:33:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:06.313 06:33:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:06.313 06:33:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:06.313 06:33:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:06.313 06:33:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.313 06:33:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.313 06:33:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.313 06:33:33 -- paths/export.sh@5 -- $ export PATH 00:02:06.313 06:33:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.313 06:33:33 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:06.313 06:33:33 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:06.313 06:33:33 -- 
common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1723617213.XXXXXX 00:02:06.313 06:33:33 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1723617213.sEidqV 00:02:06.313 06:33:33 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:06.313 06:33:33 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:02:06.313 06:33:33 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:06.585 06:33:33 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:06.586 06:33:33 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:06.586 06:33:33 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:06.586 06:33:33 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:06.586 06:33:33 -- common/autotest_common.sh@394 -- $ xtrace_disable 00:02:06.586 06:33:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.586 06:33:33 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:06.586 06:33:33 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:06.586 06:33:33 -- pm/common@17 -- $ local monitor 00:02:06.586 06:33:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.586 06:33:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.586 06:33:33 -- pm/common@25 -- $ sleep 1 00:02:06.586 06:33:33 -- pm/common@21 -- $ date +%s 00:02:06.586 06:33:33 -- pm/common@21 -- $ date +%s 00:02:06.586 06:33:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1723617213 00:02:06.586 06:33:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1723617213 00:02:06.586 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1723617213_collect-vmstat.pm.log 00:02:06.586 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1723617213_collect-cpu-load.pm.log 00:02:07.526 06:33:34 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:07.526 06:33:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:07.526 06:33:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:07.526 06:33:34 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:07.526 06:33:34 -- spdk/autobuild.sh@16 -- $ date -u 00:02:07.526 Wed Aug 14 06:33:34 AM UTC 2024 00:02:07.526 06:33:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:07.526 v24.09-pre-416-gd47670264 00:02:07.526 06:33:34 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:07.526 06:33:34 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:07.526 06:33:34 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:07.526 06:33:34 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:07.526 06:33:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.526 ************************************ 00:02:07.526 START 
TEST asan 00:02:07.526 ************************************ 00:02:07.526 using asan 00:02:07.526 06:33:34 asan -- common/autotest_common.sh@1121 -- $ echo 'using asan' 00:02:07.526 00:02:07.526 real 0m0.001s 00:02:07.526 user 0m0.001s 00:02:07.526 sys 0m0.000s 00:02:07.526 06:33:34 asan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:07.526 06:33:34 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:07.526 ************************************ 00:02:07.526 END TEST asan 00:02:07.526 ************************************ 00:02:07.526 06:33:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:07.526 06:33:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:07.526 06:33:34 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:07.526 06:33:34 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:07.526 06:33:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.526 ************************************ 00:02:07.526 START TEST ubsan 00:02:07.526 ************************************ 00:02:07.526 using ubsan 00:02:07.526 06:33:34 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:02:07.526 00:02:07.526 real 0m0.000s 00:02:07.526 user 0m0.000s 00:02:07.526 sys 0m0.000s 00:02:07.526 06:33:34 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:07.526 06:33:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:07.526 ************************************ 00:02:07.526 END TEST ubsan 00:02:07.526 ************************************ 00:02:07.787 06:33:34 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:07.787 06:33:34 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:07.787 06:33:34 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:07.787 06:33:34 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:07.787 06:33:34 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:07.787 06:33:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.787 ************************************ 00:02:07.787 START TEST build_native_dpdk 00:02:07.787 ************************************ 00:02:07.787 06:33:34 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@69 
-- $ compiler_version=13 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:07.787 caf0f5d395 version: 22.11.4 00:02:07.787 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:07.787 dc9c799c7d vhost: fix missing spinlock unlock 00:02:07.787 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:07.787 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:07.787 06:33:34 
build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:07.787 patching file config/rte_config.h 00:02:07.787 Hunk #1 succeeded at 60 (offset 1 line). 00:02:07.787 06:33:34 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:07.787 06:33:34 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:07.788 06:33:34 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:02:07.788 06:33:34 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:07.788 patching file lib/pcapng/rte_pcapng.c 00:02:07.788 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:07.788 06:33:34 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:07.788 06:33:34 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:02:07.788 06:33:34 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:07.788 06:33:34 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:07.788 06:33:34 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:14.361 The Meson build system 00:02:14.361 Version: 1.5.0 00:02:14.361 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:14.361 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:14.361 Build type: native build 00:02:14.361 Program cat found: YES (/usr/bin/cat) 00:02:14.361 Project name: DPDK 00:02:14.361 Project version: 22.11.4 00:02:14.361 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:14.361 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:14.361 Host machine cpu family: x86_64 00:02:14.361 Host machine cpu: x86_64 00:02:14.361 Message: ## Building in Developer Mode ## 00:02:14.361 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:14.361 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:14.361 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:14.361 Program objdump found: YES (/usr/bin/objdump) 00:02:14.361 Program python3 found: YES (/usr/bin/python3) 00:02:14.361 Program cat found: YES (/usr/bin/cat) 00:02:14.361 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
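The lt/cmp_versions trace above (deciding that DPDK 22.11.4 is older than 24.07.0, so the pcapng patch is applied) can be restated as a simplified standalone helper; this is a sketch of the idea only, assuming plain numeric version components, not the actual scripts/common.sh implementation:

    ver_lt() {                      # return 0 if $1 < $2
        local IFS=.-:               # split on '.', '-' and ':', as in the trace
        local -a a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1                    # equal versions are not "less than"
    }
    ver_lt 22.11.4 24.07.0 && echo "patch pcapng"   # prints "patch pcapng"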
00:02:14.361 Checking for size of "void *" : 8 00:02:14.361 Checking for size of "void *" : 8 (cached) 00:02:14.361 Library m found: YES 00:02:14.361 Library numa found: YES 00:02:14.361 Has header "numaif.h" : YES 00:02:14.361 Library fdt found: NO 00:02:14.361 Library execinfo found: NO 00:02:14.361 Has header "execinfo.h" : YES 00:02:14.361 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:14.361 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:14.361 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:14.361 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:14.361 Run-time dependency openssl found: YES 3.1.1 00:02:14.361 Run-time dependency libpcap found: YES 1.10.4 00:02:14.361 Has header "pcap.h" with dependency libpcap: YES 00:02:14.361 Compiler for C supports arguments -Wcast-qual: YES 00:02:14.361 Compiler for C supports arguments -Wdeprecated: YES 00:02:14.361 Compiler for C supports arguments -Wformat: YES 00:02:14.361 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:14.361 Compiler for C supports arguments -Wformat-security: NO 00:02:14.361 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:14.361 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:14.361 Compiler for C supports arguments -Wnested-externs: YES 00:02:14.361 Compiler for C supports arguments -Wold-style-definition: YES 00:02:14.361 Compiler for C supports arguments -Wpointer-arith: YES 00:02:14.361 Compiler for C supports arguments -Wsign-compare: YES 00:02:14.361 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:14.361 Compiler for C supports arguments -Wundef: YES 00:02:14.361 Compiler for C supports arguments -Wwrite-strings: YES 00:02:14.361 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:14.361 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:14.361 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:14.361 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:14.361 Compiler for C supports arguments -mavx512f: YES 00:02:14.361 Checking if "AVX512 checking" compiles: YES 00:02:14.361 Fetching value of define "__SSE4_2__" : 1 00:02:14.361 Fetching value of define "__AES__" : 1 00:02:14.361 Fetching value of define "__AVX__" : 1 00:02:14.361 Fetching value of define "__AVX2__" : 1 00:02:14.361 Fetching value of define "__AVX512BW__" : 1 00:02:14.361 Fetching value of define "__AVX512CD__" : 1 00:02:14.361 Fetching value of define "__AVX512DQ__" : 1 00:02:14.361 Fetching value of define "__AVX512F__" : 1 00:02:14.361 Fetching value of define "__AVX512VL__" : 1 00:02:14.361 Fetching value of define "__PCLMUL__" : 1 00:02:14.361 Fetching value of define "__RDRND__" : 1 00:02:14.361 Fetching value of define "__RDSEED__" : 1 00:02:14.361 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:14.361 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:14.361 Message: lib/kvargs: Defining dependency "kvargs" 00:02:14.361 Message: lib/telemetry: Defining dependency "telemetry" 00:02:14.361 Checking for function "getentropy" : YES 00:02:14.361 Message: lib/eal: Defining dependency "eal" 00:02:14.361 Message: lib/ring: Defining dependency "ring" 00:02:14.361 Message: lib/rcu: Defining dependency "rcu" 00:02:14.361 Message: lib/mempool: Defining dependency "mempool" 00:02:14.361 Message: lib/mbuf: Defining dependency "mbuf" 00:02:14.361 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:14.361 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:14.361 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:14.361 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:14.361 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:14.361 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:14.361 Compiler for C supports arguments -mpclmul: YES 00:02:14.361 Compiler for C supports arguments -maes: YES 00:02:14.361 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.361 Compiler for C supports arguments -mavx512bw: YES 00:02:14.361 Compiler for C supports arguments -mavx512dq: YES 00:02:14.361 Compiler for C supports arguments -mavx512vl: YES 00:02:14.361 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:14.361 Compiler for C supports arguments -mavx2: YES 00:02:14.361 Compiler for C supports arguments -mavx: YES 00:02:14.361 Message: lib/net: Defining dependency "net" 00:02:14.361 Message: lib/meter: Defining dependency "meter" 00:02:14.361 Message: lib/ethdev: Defining dependency "ethdev" 00:02:14.361 Message: lib/pci: Defining dependency "pci" 00:02:14.361 Message: lib/cmdline: Defining dependency "cmdline" 00:02:14.361 Message: lib/metrics: Defining dependency "metrics" 00:02:14.361 Message: lib/hash: Defining dependency "hash" 00:02:14.361 Message: lib/timer: Defining dependency "timer" 00:02:14.361 Fetching value of define "__AVX2__" : 1 (cached) 00:02:14.361 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.361 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:14.362 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:14.362 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:14.362 Message: lib/acl: Defining dependency "acl" 00:02:14.362 Message: lib/bbdev: Defining dependency "bbdev" 00:02:14.362 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:14.362 Run-time dependency libelf found: YES 0.191 00:02:14.362 Message: lib/bpf: Defining dependency "bpf" 00:02:14.362 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:14.362 Message: lib/compressdev: Defining dependency "compressdev" 00:02:14.362 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:14.362 Message: lib/distributor: Defining dependency "distributor" 00:02:14.362 Message: lib/efd: Defining dependency "efd" 00:02:14.362 Message: lib/eventdev: Defining dependency "eventdev" 00:02:14.362 Message: lib/gpudev: Defining dependency "gpudev" 00:02:14.362 Message: lib/gro: Defining dependency "gro" 00:02:14.362 Message: lib/gso: Defining dependency "gso" 00:02:14.362 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:14.362 Message: lib/jobstats: Defining dependency "jobstats" 00:02:14.362 Message: lib/latencystats: Defining dependency "latencystats" 00:02:14.362 Message: lib/lpm: Defining dependency "lpm" 00:02:14.362 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.362 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:14.362 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:14.362 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:14.362 Message: lib/member: Defining dependency "member" 00:02:14.362 Message: lib/pcapng: Defining dependency "pcapng" 00:02:14.362 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:14.362 Message: lib/power: Defining dependency "power" 00:02:14.362 Message: lib/rawdev: Defining dependency "rawdev" 00:02:14.362 Message: lib/regexdev: Defining dependency "regexdev" 00:02:14.362 Message: lib/dmadev: 
Defining dependency "dmadev" 00:02:14.362 Message: lib/rib: Defining dependency "rib" 00:02:14.362 Message: lib/reorder: Defining dependency "reorder" 00:02:14.362 Message: lib/sched: Defining dependency "sched" 00:02:14.362 Message: lib/security: Defining dependency "security" 00:02:14.362 Message: lib/stack: Defining dependency "stack" 00:02:14.362 Has header "linux/userfaultfd.h" : YES 00:02:14.362 Message: lib/vhost: Defining dependency "vhost" 00:02:14.362 Message: lib/ipsec: Defining dependency "ipsec" 00:02:14.362 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.362 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:14.362 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:14.362 Message: lib/fib: Defining dependency "fib" 00:02:14.362 Message: lib/port: Defining dependency "port" 00:02:14.362 Message: lib/pdump: Defining dependency "pdump" 00:02:14.362 Message: lib/table: Defining dependency "table" 00:02:14.362 Message: lib/pipeline: Defining dependency "pipeline" 00:02:14.362 Message: lib/graph: Defining dependency "graph" 00:02:14.362 Message: lib/node: Defining dependency "node" 00:02:14.362 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:14.362 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:14.362 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:14.362 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:14.362 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:14.362 Compiler for C supports arguments -Wno-unused-value: YES 00:02:14.362 Compiler for C supports arguments -Wno-format: YES 00:02:14.362 Compiler for C supports arguments -Wno-format-security: YES 00:02:14.362 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:14.362 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:14.932 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:14.932 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:14.932 Fetching value of define "__AVX2__" : 1 (cached) 00:02:14.932 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.932 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:14.932 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.932 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:14.932 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:14.932 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:14.932 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:14.932 Configuring doxy-api.conf using configuration 00:02:14.932 Program sphinx-build found: NO 00:02:14.932 Configuring rte_build_config.h using configuration 00:02:14.932 Message: 00:02:14.932 ================= 00:02:14.932 Applications Enabled 00:02:14.932 ================= 00:02:14.932 00:02:14.932 apps: 00:02:14.932 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:14.932 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:14.932 test-security-perf, 00:02:14.932 00:02:14.932 Message: 00:02:14.932 ================= 00:02:14.932 Libraries Enabled 00:02:14.932 ================= 00:02:14.932 00:02:14.932 libs: 00:02:14.932 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:14.932 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:14.932 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:14.932 eventdev, 
gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:14.932 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:14.932 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:14.932 table, pipeline, graph, node, 00:02:14.932 00:02:14.932 Message: 00:02:14.932 =============== 00:02:14.932 Drivers Enabled 00:02:14.932 =============== 00:02:14.932 00:02:14.932 common: 00:02:14.932 00:02:14.932 bus: 00:02:14.932 pci, vdev, 00:02:14.932 mempool: 00:02:14.932 ring, 00:02:14.932 dma: 00:02:14.932 00:02:14.932 net: 00:02:14.932 i40e, 00:02:14.932 raw: 00:02:14.932 00:02:14.932 crypto: 00:02:14.932 00:02:14.932 compress: 00:02:14.932 00:02:14.932 regex: 00:02:14.932 00:02:14.932 vdpa: 00:02:14.932 00:02:14.932 event: 00:02:14.932 00:02:14.932 baseband: 00:02:14.932 00:02:14.932 gpu: 00:02:14.932 00:02:14.932 00:02:14.932 Message: 00:02:14.932 ================= 00:02:14.932 Content Skipped 00:02:14.932 ================= 00:02:14.932 00:02:14.932 apps: 00:02:14.932 00:02:14.932 libs: 00:02:14.932 kni: explicitly disabled via build config (deprecated lib) 00:02:14.932 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:14.932 00:02:14.932 drivers: 00:02:14.932 common/cpt: not in enabled drivers build config 00:02:14.932 common/dpaax: not in enabled drivers build config 00:02:14.932 common/iavf: not in enabled drivers build config 00:02:14.932 common/idpf: not in enabled drivers build config 00:02:14.932 common/mvep: not in enabled drivers build config 00:02:14.932 common/octeontx: not in enabled drivers build config 00:02:14.932 bus/auxiliary: not in enabled drivers build config 00:02:14.932 bus/dpaa: not in enabled drivers build config 00:02:14.932 bus/fslmc: not in enabled drivers build config 00:02:14.932 bus/ifpga: not in enabled drivers build config 00:02:14.932 bus/vmbus: not in enabled drivers build config 00:02:14.932 common/cnxk: not in enabled drivers build config 00:02:14.932 common/mlx5: not in enabled drivers build config 00:02:14.932 common/qat: not in enabled drivers build config 00:02:14.932 common/sfc_efx: not in enabled drivers build config 00:02:14.932 mempool/bucket: not in enabled drivers build config 00:02:14.932 mempool/cnxk: not in enabled drivers build config 00:02:14.932 mempool/dpaa: not in enabled drivers build config 00:02:14.932 mempool/dpaa2: not in enabled drivers build config 00:02:14.932 mempool/octeontx: not in enabled drivers build config 00:02:14.932 mempool/stack: not in enabled drivers build config 00:02:14.932 dma/cnxk: not in enabled drivers build config 00:02:14.932 dma/dpaa: not in enabled drivers build config 00:02:14.932 dma/dpaa2: not in enabled drivers build config 00:02:14.932 dma/hisilicon: not in enabled drivers build config 00:02:14.932 dma/idxd: not in enabled drivers build config 00:02:14.932 dma/ioat: not in enabled drivers build config 00:02:14.933 dma/skeleton: not in enabled drivers build config 00:02:14.933 net/af_packet: not in enabled drivers build config 00:02:14.933 net/af_xdp: not in enabled drivers build config 00:02:14.933 net/ark: not in enabled drivers build config 00:02:14.933 net/atlantic: not in enabled drivers build config 00:02:14.933 net/avp: not in enabled drivers build config 00:02:14.933 net/axgbe: not in enabled drivers build config 00:02:14.933 net/bnx2x: not in enabled drivers build config 00:02:14.933 net/bnxt: not in enabled drivers build config 00:02:14.933 net/bonding: not in enabled drivers build config 00:02:14.933 net/cnxk: not in enabled drivers build config 
00:02:14.933 net/cxgbe: not in enabled drivers build config 00:02:14.933 net/dpaa: not in enabled drivers build config 00:02:14.933 net/dpaa2: not in enabled drivers build config 00:02:14.933 net/e1000: not in enabled drivers build config 00:02:14.933 net/ena: not in enabled drivers build config 00:02:14.933 net/enetc: not in enabled drivers build config 00:02:14.933 net/enetfec: not in enabled drivers build config 00:02:14.933 net/enic: not in enabled drivers build config 00:02:14.933 net/failsafe: not in enabled drivers build config 00:02:14.933 net/fm10k: not in enabled drivers build config 00:02:14.933 net/gve: not in enabled drivers build config 00:02:14.933 net/hinic: not in enabled drivers build config 00:02:14.933 net/hns3: not in enabled drivers build config 00:02:14.933 net/iavf: not in enabled drivers build config 00:02:14.933 net/ice: not in enabled drivers build config 00:02:14.933 net/idpf: not in enabled drivers build config 00:02:14.933 net/igc: not in enabled drivers build config 00:02:14.933 net/ionic: not in enabled drivers build config 00:02:14.933 net/ipn3ke: not in enabled drivers build config 00:02:14.933 net/ixgbe: not in enabled drivers build config 00:02:14.933 net/kni: not in enabled drivers build config 00:02:14.933 net/liquidio: not in enabled drivers build config 00:02:14.933 net/mana: not in enabled drivers build config 00:02:14.933 net/memif: not in enabled drivers build config 00:02:14.933 net/mlx4: not in enabled drivers build config 00:02:14.933 net/mlx5: not in enabled drivers build config 00:02:14.933 net/mvneta: not in enabled drivers build config 00:02:14.933 net/mvpp2: not in enabled drivers build config 00:02:14.933 net/netvsc: not in enabled drivers build config 00:02:14.933 net/nfb: not in enabled drivers build config 00:02:14.933 net/nfp: not in enabled drivers build config 00:02:14.933 net/ngbe: not in enabled drivers build config 00:02:14.933 net/null: not in enabled drivers build config 00:02:14.933 net/octeontx: not in enabled drivers build config 00:02:14.933 net/octeon_ep: not in enabled drivers build config 00:02:14.933 net/pcap: not in enabled drivers build config 00:02:14.933 net/pfe: not in enabled drivers build config 00:02:14.933 net/qede: not in enabled drivers build config 00:02:14.933 net/ring: not in enabled drivers build config 00:02:14.933 net/sfc: not in enabled drivers build config 00:02:14.933 net/softnic: not in enabled drivers build config 00:02:14.933 net/tap: not in enabled drivers build config 00:02:14.933 net/thunderx: not in enabled drivers build config 00:02:14.933 net/txgbe: not in enabled drivers build config 00:02:14.933 net/vdev_netvsc: not in enabled drivers build config 00:02:14.933 net/vhost: not in enabled drivers build config 00:02:14.933 net/virtio: not in enabled drivers build config 00:02:14.933 net/vmxnet3: not in enabled drivers build config 00:02:14.933 raw/cnxk_bphy: not in enabled drivers build config 00:02:14.933 raw/cnxk_gpio: not in enabled drivers build config 00:02:14.933 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:14.933 raw/ifpga: not in enabled drivers build config 00:02:14.933 raw/ntb: not in enabled drivers build config 00:02:14.933 raw/skeleton: not in enabled drivers build config 00:02:14.933 crypto/armv8: not in enabled drivers build config 00:02:14.933 crypto/bcmfs: not in enabled drivers build config 00:02:14.933 crypto/caam_jr: not in enabled drivers build config 00:02:14.933 crypto/ccp: not in enabled drivers build config 00:02:14.933 crypto/cnxk: not in enabled drivers 
build config 00:02:14.933 crypto/dpaa_sec: not in enabled drivers build config 00:02:14.933 crypto/dpaa2_sec: not in enabled drivers build config 00:02:14.933 crypto/ipsec_mb: not in enabled drivers build config 00:02:14.933 crypto/mlx5: not in enabled drivers build config 00:02:14.933 crypto/mvsam: not in enabled drivers build config 00:02:14.933 crypto/nitrox: not in enabled drivers build config 00:02:14.933 crypto/null: not in enabled drivers build config 00:02:14.933 crypto/octeontx: not in enabled drivers build config 00:02:14.933 crypto/openssl: not in enabled drivers build config 00:02:14.933 crypto/scheduler: not in enabled drivers build config 00:02:14.933 crypto/uadk: not in enabled drivers build config 00:02:14.933 crypto/virtio: not in enabled drivers build config 00:02:14.933 compress/isal: not in enabled drivers build config 00:02:14.933 compress/mlx5: not in enabled drivers build config 00:02:14.933 compress/octeontx: not in enabled drivers build config 00:02:14.933 compress/zlib: not in enabled drivers build config 00:02:14.933 regex/mlx5: not in enabled drivers build config 00:02:14.933 regex/cn9k: not in enabled drivers build config 00:02:14.933 vdpa/ifc: not in enabled drivers build config 00:02:14.933 vdpa/mlx5: not in enabled drivers build config 00:02:14.933 vdpa/sfc: not in enabled drivers build config 00:02:14.933 event/cnxk: not in enabled drivers build config 00:02:14.933 event/dlb2: not in enabled drivers build config 00:02:14.933 event/dpaa: not in enabled drivers build config 00:02:14.933 event/dpaa2: not in enabled drivers build config 00:02:14.933 event/dsw: not in enabled drivers build config 00:02:14.933 event/opdl: not in enabled drivers build config 00:02:14.933 event/skeleton: not in enabled drivers build config 00:02:14.933 event/sw: not in enabled drivers build config 00:02:14.933 event/octeontx: not in enabled drivers build config 00:02:14.933 baseband/acc: not in enabled drivers build config 00:02:14.933 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:14.933 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:14.933 baseband/la12xx: not in enabled drivers build config 00:02:14.933 baseband/null: not in enabled drivers build config 00:02:14.933 baseband/turbo_sw: not in enabled drivers build config 00:02:14.933 gpu/cuda: not in enabled drivers build config 00:02:14.933 00:02:14.933 00:02:14.933 Build targets in project: 311 00:02:14.933 00:02:14.933 DPDK 22.11.4 00:02:14.933 00:02:14.933 User defined options 00:02:14.933 libdir : lib 00:02:14.933 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:14.933 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:14.933 c_link_args : 00:02:14.933 enable_docs : false 00:02:14.933 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:14.933 enable_kmods : false 00:02:14.933 machine : native 00:02:14.933 tests : false 00:02:14.933 00:02:14.933 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:14.933 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
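For reference, the "User defined options" recorded above correspond to a DPDK 22.11.4 configure step roughly like the following (a minimal sketch reconstructed from the logged options; the checkout and build-tmp directory names are taken from paths appearing in this log, and the exact way SPDK's autobuild scripts invoke meson is an assumption, not shown here). Using the explicit `meson setup` form also avoids the deprecation warning printed just above.

  # sketch: reproduce the logged DPDK configuration (paths assumed from this log)
  cd /home/vagrant/spdk_repo/dpdk
  meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false \
    -Dmachine=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
  # build with the same parallelism used later in this log
  ninja -C build-tmp -j10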
00:02:15.193 06:33:42 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:15.193 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:15.193 [1/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:15.193 [2/740] Generating lib/rte_kvargs_def with a custom command 00:02:15.193 [3/740] Generating lib/rte_telemetry_def with a custom command 00:02:15.193 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:15.193 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:15.193 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:15.193 [7/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:15.193 [8/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:15.193 [9/740] Linking static target lib/librte_kvargs.a 00:02:15.193 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:15.453 [11/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:15.453 [12/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:15.453 [13/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:15.453 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:15.453 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:15.453 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:15.453 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:15.453 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:15.453 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:15.453 [20/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.453 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:15.453 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:15.712 [23/740] Linking target lib/librte_kvargs.so.23.0 00:02:15.712 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:15.712 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:15.712 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:15.712 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:15.712 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:15.712 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:15.712 [30/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:15.712 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:15.712 [32/740] Linking static target lib/librte_telemetry.a 00:02:15.712 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:15.712 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:15.712 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:15.972 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:15.972 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:15.972 [38/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:15.972 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:15.972 [40/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:15.972 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:15.972 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:16.232 [43/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.232 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:16.232 [45/740] Linking target lib/librte_telemetry.so.23.0 00:02:16.232 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:16.232 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:16.232 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:16.232 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:16.232 [50/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:16.232 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:16.232 [52/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:16.232 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:16.232 [54/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:16.232 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:16.232 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:16.232 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:16.232 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:16.232 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:16.232 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:16.491 [61/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:16.491 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:16.491 [63/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:16.491 [64/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:16.491 [65/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:16.491 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:16.491 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:16.491 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:16.491 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.491 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:16.491 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.491 [72/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:16.491 [73/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:16.491 [74/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.491 [75/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:16.491 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:16.491 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:16.491 [78/740] Generating 
lib/rte_eal_def with a custom command 00:02:16.750 [79/740] Generating lib/rte_eal_mingw with a custom command 00:02:16.750 [80/740] Generating lib/rte_ring_def with a custom command 00:02:16.750 [81/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:16.750 [82/740] Generating lib/rte_ring_mingw with a custom command 00:02:16.750 [83/740] Generating lib/rte_rcu_def with a custom command 00:02:16.750 [84/740] Generating lib/rte_rcu_mingw with a custom command 00:02:16.750 [85/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:16.750 [86/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:16.750 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:16.750 [88/740] Linking static target lib/librte_ring.a 00:02:16.750 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.750 [90/740] Generating lib/rte_mempool_def with a custom command 00:02:16.750 [91/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:16.750 [92/740] Generating lib/rte_mempool_mingw with a custom command 00:02:16.750 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:17.009 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.009 [95/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:17.009 [96/740] Generating lib/rte_mbuf_def with a custom command 00:02:17.009 [97/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:17.009 [98/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:17.009 [99/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:17.009 [100/740] Linking static target lib/librte_eal.a 00:02:17.268 [101/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:17.268 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:17.268 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:17.528 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:17.528 [105/740] Linking static target lib/librte_rcu.a 00:02:17.528 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:17.528 [107/740] Linking static target lib/librte_mempool.a 00:02:17.528 [108/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:17.528 [109/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:17.528 [110/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:17.528 [111/740] Generating lib/rte_net_def with a custom command 00:02:17.528 [112/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:17.528 [113/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:17.528 [114/740] Generating lib/rte_net_mingw with a custom command 00:02:17.788 [115/740] Generating lib/rte_meter_def with a custom command 00:02:17.788 [116/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:17.788 [117/740] Generating lib/rte_meter_mingw with a custom command 00:02:17.788 [118/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.788 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:17.788 [120/740] Linking static target lib/librte_meter.a 00:02:17.788 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:17.788 [122/740] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:17.788 [123/740] Linking static target lib/librte_net.a 00:02:18.065 [124/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.065 [125/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.065 [126/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.065 [127/740] Linking static target lib/librte_mbuf.a 00:02:18.065 [128/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:18.065 [129/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.065 [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.333 [131/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.333 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.333 [133/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.593 [134/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.593 [135/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.593 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:18.593 [137/740] Generating lib/rte_ethdev_def with a custom command 00:02:18.593 [138/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:18.593 [139/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:18.593 [140/740] Generating lib/rte_pci_def with a custom command 00:02:18.852 [141/740] Generating lib/rte_pci_mingw with a custom command 00:02:18.852 [142/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:18.852 [143/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:18.852 [144/740] Linking static target lib/librte_pci.a 00:02:18.852 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:18.852 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:18.852 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:18.852 [148/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:18.852 [149/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.111 [150/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:19.111 [151/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.111 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:19.111 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.111 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:19.111 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:19.111 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:19.111 [157/740] Generating lib/rte_cmdline_def with a custom command 00:02:19.111 [158/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:19.111 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.111 [160/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:19.111 [161/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:19.111 [162/740] Generating lib/rte_metrics_def with a custom command 00:02:19.111 
[163/740] Generating lib/rte_metrics_mingw with a custom command 00:02:19.370 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:19.370 [165/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:19.370 [166/740] Generating lib/rte_hash_def with a custom command 00:02:19.370 [167/740] Linking static target lib/librte_cmdline.a 00:02:19.370 [168/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:19.371 [169/740] Generating lib/rte_hash_mingw with a custom command 00:02:19.371 [170/740] Generating lib/rte_timer_def with a custom command 00:02:19.371 [171/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:19.371 [172/740] Generating lib/rte_timer_mingw with a custom command 00:02:19.371 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:19.630 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:19.630 [175/740] Linking static target lib/librte_metrics.a 00:02:19.630 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:19.630 [177/740] Linking static target lib/librte_timer.a 00:02:19.890 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.890 [179/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:20.149 [180/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.149 [181/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.149 [182/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.149 [183/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:20.149 [184/740] Generating lib/rte_acl_def with a custom command 00:02:20.149 [185/740] Generating lib/rte_acl_mingw with a custom command 00:02:20.409 [186/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:20.409 [187/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:20.409 [188/740] Generating lib/rte_bbdev_def with a custom command 00:02:20.409 [189/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:20.409 [190/740] Generating lib/rte_bitratestats_def with a custom command 00:02:20.409 [191/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:20.409 [192/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:20.409 [193/740] Linking static target lib/librte_ethdev.a 00:02:20.978 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:20.978 [195/740] Linking static target lib/librte_bitratestats.a 00:02:20.978 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:20.978 [197/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:20.978 [198/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:20.978 [199/740] Linking static target lib/librte_bbdev.a 00:02:20.978 [200/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.237 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:21.496 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:21.496 [203/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.496 [204/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:21.756 [205/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:21.756 
[206/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:21.756 [207/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:21.756 [208/740] Linking static target lib/librte_hash.a 00:02:22.016 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:22.016 [210/740] Generating lib/rte_bpf_def with a custom command 00:02:22.016 [211/740] Generating lib/rte_bpf_mingw with a custom command 00:02:22.276 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:22.276 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:02:22.276 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:22.276 [215/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:22.276 [216/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:22.276 [217/740] Linking static target lib/librte_cfgfile.a 00:02:22.276 [218/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:22.276 [219/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.276 [220/740] Generating lib/rte_compressdev_def with a custom command 00:02:22.276 [221/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:22.549 [222/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:22.549 [223/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:22.549 [224/740] Linking static target lib/librte_bpf.a 00:02:22.549 [225/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.549 [226/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:22.809 [227/740] Generating lib/rte_cryptodev_def with a custom command 00:02:22.809 [228/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:22.809 [229/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:22.809 [230/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:22.809 [231/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.809 [232/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:22.809 [233/740] Linking static target lib/librte_acl.a 00:02:22.809 [234/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:22.809 [235/740] Generating lib/rte_distributor_def with a custom command 00:02:22.809 [236/740] Linking static target lib/librte_compressdev.a 00:02:22.809 [237/740] Generating lib/rte_distributor_mingw with a custom command 00:02:23.068 [238/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:23.068 [239/740] Generating lib/rte_efd_def with a custom command 00:02:23.068 [240/740] Generating lib/rte_efd_mingw with a custom command 00:02:23.068 [241/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.068 [242/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.068 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:23.328 [244/740] Linking target lib/librte_eal.so.23.0 00:02:23.328 [245/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:23.328 [246/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:23.328 [247/740] Linking target lib/librte_ring.so.23.0 
00:02:23.328 [248/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:23.588 [249/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:23.588 [250/740] Linking target lib/librte_meter.so.23.0 00:02:23.588 [251/740] Linking target lib/librte_pci.so.23.0 00:02:23.588 [252/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:23.588 [253/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.588 [254/740] Linking target lib/librte_rcu.so.23.0 00:02:23.588 [255/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:23.588 [256/740] Linking target lib/librte_mempool.so.23.0 00:02:23.588 [257/740] Linking target lib/librte_timer.so.23.0 00:02:23.588 [258/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:23.588 [259/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:23.588 [260/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:23.588 [261/740] Linking target lib/librte_acl.so.23.0 00:02:23.588 [262/740] Linking target lib/librte_cfgfile.so.23.0 00:02:23.588 [263/740] Linking static target lib/librte_distributor.a 00:02:23.588 [264/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:23.588 [265/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:23.847 [266/740] Linking target lib/librte_mbuf.so.23.0 00:02:23.847 [267/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:23.847 [268/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:23.847 [269/740] Linking target lib/librte_net.so.23.0 00:02:23.847 [270/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.847 [271/740] Linking target lib/librte_bbdev.so.23.0 00:02:23.847 [272/740] Linking target lib/librte_compressdev.so.23.0 00:02:24.111 [273/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:24.111 [274/740] Linking target lib/librte_distributor.so.23.0 00:02:24.111 [275/740] Linking target lib/librte_cmdline.so.23.0 00:02:24.111 [276/740] Linking target lib/librte_hash.so.23.0 00:02:24.111 [277/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:24.111 [278/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:24.111 [279/740] Linking static target lib/librte_efd.a 00:02:24.111 [280/740] Generating lib/rte_eventdev_def with a custom command 00:02:24.111 [281/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:24.111 [282/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:24.111 [283/740] Generating lib/rte_gpudev_def with a custom command 00:02:24.111 [284/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:24.378 [285/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:24.378 [286/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.378 [287/740] Linking target lib/librte_efd.so.23.0 00:02:24.378 [288/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.378 [289/740] Linking target lib/librte_ethdev.so.23.0 00:02:24.638 [290/740] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:24.638 [291/740] Linking static target lib/librte_cryptodev.a 00:02:24.638 [292/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:24.638 [293/740] Linking target lib/librte_metrics.so.23.0 00:02:24.638 [294/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:24.897 [295/740] Linking target lib/librte_bitratestats.so.23.0 00:02:24.897 [296/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:24.897 [297/740] Linking static target lib/librte_gpudev.a 00:02:24.897 [298/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:24.897 [299/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:24.897 [300/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:24.897 [301/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:24.897 [302/740] Generating lib/rte_gro_def with a custom command 00:02:24.897 [303/740] Linking target lib/librte_bpf.so.23.0 00:02:24.897 [304/740] Generating lib/rte_gro_mingw with a custom command 00:02:24.897 [305/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:24.897 [306/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:25.156 [307/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:25.156 [308/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:25.414 [309/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:25.414 [310/740] Generating lib/rte_gso_def with a custom command 00:02:25.414 [311/740] Generating lib/rte_gso_mingw with a custom command 00:02:25.414 [312/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:25.414 [313/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:25.414 [314/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:25.414 [315/740] Linking static target lib/librte_gro.a 00:02:25.414 [316/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.414 [317/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:25.414 [318/740] Linking target lib/librte_gpudev.so.23.0 00:02:25.675 [319/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.675 [320/740] Linking target lib/librte_gro.so.23.0 00:02:25.675 [321/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:25.675 [322/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:25.675 [323/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:25.675 [324/740] Linking static target lib/librte_gso.a 00:02:25.675 [325/740] Linking static target lib/librte_eventdev.a 00:02:25.675 [326/740] Generating lib/rte_ip_frag_def with a custom command 00:02:25.675 [327/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:25.934 [328/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:25.934 [329/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.934 [330/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:25.934 [331/740] Linking static target lib/librte_jobstats.a 00:02:25.934 [332/740] Generating lib/rte_jobstats_def with a custom command 00:02:25.934 [333/740] Linking target lib/librte_gso.so.23.0 
00:02:25.934 [334/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:25.934 [335/740] Generating lib/rte_latencystats_def with a custom command 00:02:25.934 [336/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:25.934 [337/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:25.934 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:25.934 [339/740] Generating lib/rte_lpm_def with a custom command 00:02:25.934 [340/740] Generating lib/rte_lpm_mingw with a custom command 00:02:26.194 [341/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:26.194 [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:26.194 [343/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.194 [344/740] Linking target lib/librte_jobstats.so.23.0 00:02:26.194 [345/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:26.194 [346/740] Linking static target lib/librte_ip_frag.a 00:02:26.453 [347/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:26.453 [348/740] Linking static target lib/librte_latencystats.a 00:02:26.453 [349/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.453 [350/740] Linking target lib/librte_cryptodev.so.23.0 00:02:26.453 [351/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.453 [352/740] Linking target lib/librte_ip_frag.so.23.0 00:02:26.713 [353/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:26.713 [354/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:26.713 [355/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:26.713 [356/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:26.713 [357/740] Generating lib/rte_member_def with a custom command 00:02:26.713 [358/740] Generating lib/rte_member_mingw with a custom command 00:02:26.713 [359/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:26.713 [360/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.713 [361/740] Generating lib/rte_pcapng_def with a custom command 00:02:26.713 [362/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:26.713 [363/740] Linking target lib/librte_latencystats.so.23.0 00:02:26.713 [364/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:26.713 [365/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:26.713 [366/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:26.972 [367/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:26.972 [368/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:26.972 [369/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:26.972 [370/740] Linking static target lib/librte_lpm.a 00:02:27.231 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:27.231 [372/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:27.231 [373/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:27.231 [374/740] Generating lib/rte_power_def with a custom command 
00:02:27.231 [375/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:27.231 [376/740] Generating lib/rte_power_mingw with a custom command 00:02:27.231 [377/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:27.231 [378/740] Generating lib/rte_rawdev_def with a custom command 00:02:27.231 [379/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:27.231 [380/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.231 [381/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.231 [382/740] Generating lib/rte_regexdev_def with a custom command 00:02:27.231 [383/740] Linking target lib/librte_lpm.so.23.0 00:02:27.231 [384/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:27.499 [385/740] Linking target lib/librte_eventdev.so.23.0 00:02:27.499 [386/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:27.499 [387/740] Linking static target lib/librte_pcapng.a 00:02:27.499 [388/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:27.499 [389/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:27.499 [390/740] Generating lib/rte_dmadev_def with a custom command 00:02:27.499 [391/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:27.499 [392/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:27.499 [393/740] Generating lib/rte_rib_def with a custom command 00:02:27.499 [394/740] Generating lib/rte_rib_mingw with a custom command 00:02:27.499 [395/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:27.499 [396/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:27.499 [397/740] Linking static target lib/librte_rawdev.a 00:02:27.499 [398/740] Generating lib/rte_reorder_def with a custom command 00:02:27.775 [399/740] Generating lib/rte_reorder_mingw with a custom command 00:02:27.775 [400/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.775 [401/740] Linking target lib/librte_pcapng.so.23.0 00:02:27.775 [402/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:27.775 [403/740] Linking static target lib/librte_dmadev.a 00:02:27.775 [404/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:27.775 [405/740] Linking static target lib/librte_power.a 00:02:27.775 [406/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:27.775 [407/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:27.775 [408/740] Linking static target lib/librte_regexdev.a 00:02:28.035 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:28.035 [410/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.035 [411/740] Linking target lib/librte_rawdev.so.23.0 00:02:28.035 [412/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:28.035 [413/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:28.035 [414/740] Generating lib/rte_sched_def with a custom command 00:02:28.035 [415/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:28.035 [416/740] Generating lib/rte_sched_mingw with a custom command 00:02:28.035 [417/740] Generating lib/rte_security_def with a custom command 00:02:28.035 
[418/740] Generating lib/rte_security_mingw with a custom command 00:02:28.035 [419/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:28.294 [420/740] Linking static target lib/librte_member.a 00:02:28.294 [421/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:28.294 [422/740] Linking static target lib/librte_reorder.a 00:02:28.294 [423/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:28.294 [424/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:28.294 [425/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.294 [426/740] Generating lib/rte_stack_def with a custom command 00:02:28.294 [427/740] Linking target lib/librte_dmadev.so.23.0 00:02:28.294 [428/740] Generating lib/rte_stack_mingw with a custom command 00:02:28.294 [429/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:28.294 [430/740] Linking static target lib/librte_stack.a 00:02:28.294 [431/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:28.294 [432/740] Linking static target lib/librte_rib.a 00:02:28.294 [433/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.294 [434/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:28.554 [435/740] Linking target lib/librte_reorder.so.23.0 00:02:28.554 [436/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:28.554 [437/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.554 [438/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.554 [439/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.554 [440/740] Linking target lib/librte_member.so.23.0 00:02:28.554 [441/740] Linking target lib/librte_regexdev.so.23.0 00:02:28.554 [442/740] Linking target lib/librte_stack.so.23.0 00:02:28.554 [443/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.554 [444/740] Linking target lib/librte_power.so.23.0 00:02:28.814 [445/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.814 [446/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:28.814 [447/740] Linking static target lib/librte_security.a 00:02:28.814 [448/740] Linking target lib/librte_rib.so.23.0 00:02:28.814 [449/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:28.814 [450/740] Generating lib/rte_vhost_def with a custom command 00:02:28.814 [451/740] Generating lib/rte_vhost_mingw with a custom command 00:02:29.074 [452/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:29.074 [453/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:29.074 [454/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.074 [455/740] Linking target lib/librte_security.so.23.0 00:02:29.074 [456/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:29.333 [457/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:29.333 [458/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:29.333 [459/740] Linking static target lib/librte_sched.a 00:02:29.592 [460/740] Compiling C object 
lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:29.592 [461/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:29.592 [462/740] Generating lib/rte_ipsec_def with a custom command 00:02:29.592 [463/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.592 [464/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:29.592 [465/740] Linking target lib/librte_sched.so.23.0 00:02:29.851 [466/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:29.851 [467/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:29.851 [468/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:29.851 [469/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:30.110 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:30.110 [471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:30.110 [472/740] Generating lib/rte_fib_def with a custom command 00:02:30.110 [473/740] Generating lib/rte_fib_mingw with a custom command 00:02:30.110 [474/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:30.369 [475/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:30.369 [476/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:30.629 [477/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:30.629 [478/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:30.629 [479/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:30.629 [480/740] Linking static target lib/librte_ipsec.a 00:02:30.889 [481/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:30.889 [482/740] Linking static target lib/librte_fib.a 00:02:30.889 [483/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:30.889 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:30.889 [485/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.889 [486/740] Linking target lib/librte_ipsec.so.23.0 00:02:30.889 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:31.148 [488/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:31.148 [489/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.148 [490/740] Linking target lib/librte_fib.so.23.0 00:02:31.148 [491/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:31.736 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:31.736 [493/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:31.736 [494/740] Generating lib/rte_port_def with a custom command 00:02:31.736 [495/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:31.736 [496/740] Generating lib/rte_port_mingw with a custom command 00:02:31.736 [497/740] Generating lib/rte_pdump_def with a custom command 00:02:31.736 [498/740] Generating lib/rte_pdump_mingw with a custom command 00:02:31.736 [499/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:31.736 [500/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:31.736 [501/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:31.736 [502/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:31.996 [503/740] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:31.996 [504/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:31.996 [505/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:32.259 [506/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:32.259 [507/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:32.259 [508/740] Linking static target lib/librte_port.a 00:02:32.259 [509/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:32.259 [510/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:32.519 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:32.519 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:32.519 [513/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:32.519 [514/740] Linking static target lib/librte_pdump.a 00:02:32.779 [515/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.779 [516/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.779 [517/740] Linking target lib/librte_port.so.23.0 00:02:32.779 [518/740] Linking target lib/librte_pdump.so.23.0 00:02:32.779 [519/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:33.039 [520/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:33.039 [521/740] Generating lib/rte_table_def with a custom command 00:02:33.039 [522/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:33.039 [523/740] Generating lib/rte_table_mingw with a custom command 00:02:33.039 [524/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:33.299 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:33.299 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:33.299 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:33.299 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:33.299 [529/740] Generating lib/rte_pipeline_def with a custom command 00:02:33.559 [530/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:33.559 [531/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:33.559 [532/740] Linking static target lib/librte_table.a 00:02:33.559 [533/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:33.819 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:33.819 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:33.819 [536/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.078 [537/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:34.078 [538/740] Linking target lib/librte_table.so.23.0 00:02:34.078 [539/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:34.078 [540/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:34.078 [541/740] Generating lib/rte_graph_def with a custom command 00:02:34.078 [542/740] Generating lib/rte_graph_mingw with a custom command 00:02:34.338 [543/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:34.338 [544/740] Compiling C object 
lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:34.338 [545/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:34.597 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:34.597 [547/740] Linking static target lib/librte_graph.a 00:02:34.597 [548/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:34.597 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:34.856 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:34.856 [551/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:34.856 [552/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:34.856 [553/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:35.116 [554/740] Generating lib/rte_node_def with a custom command 00:02:35.116 [555/740] Generating lib/rte_node_mingw with a custom command 00:02:35.116 [556/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.116 [557/740] Linking target lib/librte_graph.so.23.0 00:02:35.116 [558/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:35.374 [559/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:35.374 [560/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:35.374 [561/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:35.374 [562/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:35.374 [563/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:35.374 [564/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:35.374 [565/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:35.374 [566/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:35.374 [567/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:35.634 [568/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:35.634 [569/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:35.634 [570/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:35.634 [571/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:35.634 [572/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:35.634 [573/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:35.634 [574/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:35.634 [575/740] Linking static target lib/librte_node.a 00:02:35.634 [576/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:35.634 [577/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:35.634 [578/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:35.634 [579/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:35.634 [580/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:35.896 [581/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.896 [582/740] Linking target lib/librte_node.so.23.0 00:02:35.896 [583/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:35.896 [584/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.896 [585/740] Linking static target drivers/librte_bus_vdev.a 00:02:35.896 
[586/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:35.896 [587/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.896 [588/740] Linking static target drivers/librte_bus_pci.a 00:02:36.158 [589/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.158 [590/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.158 [591/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.158 [592/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:36.158 [593/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:36.158 [594/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:36.158 [595/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.418 [596/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:36.418 [597/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:36.418 [598/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:36.418 [599/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:36.418 [600/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:36.418 [601/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:36.677 [602/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:36.677 [603/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:36.677 [604/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.677 [605/740] Linking static target drivers/librte_mempool_ring.a 00:02:36.677 [606/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.677 [607/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:36.935 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:37.195 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:37.454 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:37.454 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:37.715 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:37.974 [613/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:37.974 [614/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:38.233 [615/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:38.233 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:38.502 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:38.502 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:38.502 [619/740] Generating drivers/rte_net_i40e_def with a custom command 00:02:38.502 [620/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:38.502 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:39.086 [622/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:39.345 
[623/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:39.912 [624/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:39.912 [625/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:39.912 [626/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:39.912 [627/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:40.172 [628/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:40.172 [629/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:40.172 [630/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:40.172 [631/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:40.172 [632/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:40.432 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:40.691 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:40.951 [635/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:40.951 [636/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:40.951 [637/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:40.951 [638/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:41.210 [639/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:41.210 [640/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:41.210 [641/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:41.472 [642/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:41.472 [643/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:41.472 [644/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:41.472 [645/740] Linking static target drivers/librte_net_i40e.a 00:02:41.472 [646/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:41.472 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:41.731 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:41.731 [649/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:41.990 [650/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:41.990 [651/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.990 [652/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:42.249 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:42.249 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:42.249 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:42.249 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:42.249 [657/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:42.508 [658/740] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:42.508 [659/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:42.766 [660/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:42.766 [661/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:42.766 [662/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:42.766 [663/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:43.025 [664/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:43.025 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:43.284 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:43.284 [667/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:43.542 [668/740] Linking static target lib/librte_vhost.a 00:02:43.801 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:43.801 [670/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:44.059 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:44.059 [672/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:44.317 [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:44.317 [674/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:44.317 [675/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:44.317 [676/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:44.575 [677/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:44.575 [678/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.575 [679/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:44.834 [680/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:44.834 [681/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:44.834 [682/740] Linking target lib/librte_vhost.so.23.0 00:02:44.834 [683/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:44.834 [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:45.093 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:45.352 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:45.352 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:45.352 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:45.352 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:45.352 [690/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:45.611 [691/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:45.611 [692/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:45.870 [693/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:46.127 [694/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:46.127 [695/740] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:46.127 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:46.386 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:46.386 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:46.645 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:46.903 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:46.903 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:47.162 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:47.420 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:47.420 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:47.420 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:47.678 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:47.937 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:48.195 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:48.195 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:48.195 [710/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:48.453 [711/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:48.711 [712/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:48.711 [713/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:48.711 [714/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:48.711 [715/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:49.006 [716/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:49.006 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:49.264 [718/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:49.523 [719/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:50.456 [720/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:50.456 [721/740] Linking static target lib/librte_pipeline.a 00:02:51.022 [722/740] Linking target app/dpdk-test-cmdline 00:02:51.022 [723/740] Linking target app/dpdk-pdump 00:02:51.022 [724/740] Linking target app/dpdk-proc-info 00:02:51.022 [725/740] Linking target app/dpdk-test-acl 00:02:51.022 [726/740] Linking target app/dpdk-test-bbdev 00:02:51.022 [727/740] Linking target app/dpdk-test-compress-perf 00:02:51.022 [728/740] Linking target app/dpdk-dumpcap 00:02:51.022 [729/740] Linking target app/dpdk-test-crypto-perf 00:02:51.022 [730/740] Linking target app/dpdk-test-eventdev 00:02:51.281 [731/740] Linking target app/dpdk-test-fib 00:02:51.281 [732/740] Linking target app/dpdk-test-flow-perf 00:02:51.281 [733/740] Linking target app/dpdk-test-gpudev 00:02:51.539 [734/740] Linking target app/dpdk-testpmd 00:02:51.539 [735/740] Linking target app/dpdk-test-security-perf 00:02:51.539 [736/740] Linking target app/dpdk-test-pipeline 00:02:51.539 [737/740] Linking target app/dpdk-test-sad 00:02:51.539 [738/740] Linking target app/dpdk-test-regex 00:02:54.824 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.824 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:54.824 06:34:21 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 
00:02:54.824 06:34:21 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:54.824 06:34:21 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:54.824 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:54.824 [0/1] Installing files. 00:02:54.824 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:54.824 
Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.824 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 
00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.825 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.087 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.087 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:55.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:55.088 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:55.089 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:55.089 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:55.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:55.089 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.089 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:55.090 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:55.090 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:55.090 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.090 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:55.090 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.090 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.090 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.090 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.090 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.090 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.090 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.090 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.090 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.351 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.351 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.351 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.351 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.351 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.351 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.351 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.351 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:55.351 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.352 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.353 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:55.354 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:55.354 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:55.354 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:55.354 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:55.354 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:55.354 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:55.354 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:55.354 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:55.354 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:55.354 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:55.354 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:55.354 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:55.354 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:55.354 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:55.354 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:55.354 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:55.354 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:55.354 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:55.354 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:55.354 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:55.354 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:55.354 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:55.354 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:55.354 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:55.354 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:55.354 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:55.354 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:55.354 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:55.354 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:55.354 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:55.354 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:55.354 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:55.354 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:55.354 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:55.354 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:55.354 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:55.354 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:55.354 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:55.354 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:55.354 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:55.354 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:55.354 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:55.354 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:55.354 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:55.354 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:55.354 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:55.354 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:55.354 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:55.354 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:55.354 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:55.354 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:55.354 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:55.354 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:55.354 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:55.354 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:55.354 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:55.354 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:55.354 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:55.354 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:55.354 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:55.354 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:55.354 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:55.354 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:55.354 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:55.354 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:55.354 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:55.354 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:55.354 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:55.354 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:55.354 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:55.354 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:55.354 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:55.354 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:55.354 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:55.354 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:55.355 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:55.355 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:55.355 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:55.355 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:55.355 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
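(The entries above stage DPDK's headers, shared libraries, PMD plugins under dpdk/pmds-23.0, and the libdpdk pkg-config metadata into /home/vagrant/spdk_repo/dpdk/build. As an illustrative aside, not part of the captured output: a consumer can confirm such a local prefix is discoverable before configuring against it. The sketch assumes only the paths shown in this run and a stock pkg-config binary.)

export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
pkg-config --modversion libdpdk      # prints the version of the DPDK build staged above
pkg-config --cflags --libs libdpdk   # the compile/link flags a consumer such as SPDK's configure picks up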
00:02:55.355 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:55.355 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:55.355 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:55.355 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:55.355 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:55.355 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:55.355 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:55.355 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:55.355 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:55.355 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:55.355 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:55.355 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:55.355 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:55.355 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:55.355 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:55.355 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:55.355 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:55.355 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:55.355 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:55.355 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:55.355 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:55.355 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:55.355 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:55.355 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:55.355 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:55.355 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:55.355 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:55.355 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:55.355 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:55.355 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:55.355 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:55.355 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:55.355 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:55.355 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:55.355 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:55.355 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:55.355 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:55.355 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:55.355 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:55.355 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:55.355 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:55.355 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:55.355 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:55.355 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:55.355 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:55.355 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:55.614 06:34:22 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:02:55.614 06:34:22 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:55.614 00:02:55.614 real 0m47.840s 00:02:55.614 user 4m59.585s 00:02:55.614 sys 0m50.578s 00:02:55.614 06:34:22 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:55.614 06:34:22 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:55.614 ************************************ 00:02:55.614 END TEST build_native_dpdk 00:02:55.614 ************************************ 00:02:55.614 06:34:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:55.614 06:34:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:55.614 06:34:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:55.614 06:34:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:55.614 06:34:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:55.614 06:34:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:55.614 06:34:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:55.614 06:34:22 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug 
--enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:55.614 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:55.872 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:55.872 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:55.872 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:56.130 Using 'verbs' RDMA provider 00:03:12.453 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:27.339 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:27.339 Creating mk/config.mk...done. 00:03:27.339 Creating mk/cc.flags.mk...done. 00:03:27.339 Type 'make' to build. 00:03:27.339 06:34:53 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:27.339 06:34:53 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:27.339 06:34:53 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:27.339 06:34:53 -- common/autotest_common.sh@10 -- $ set +x 00:03:27.339 ************************************ 00:03:27.339 START TEST make 00:03:27.339 ************************************ 00:03:27.339 06:34:53 make -- common/autotest_common.sh@1121 -- $ make -j10 00:03:27.339 make[1]: Nothing to be done for 'all'. 00:03:49.293 CC lib/ut_mock/mock.o 00:03:49.293 CC lib/log/log.o 00:03:49.293 CC lib/log/log_deprecated.o 00:03:49.293 CC lib/log/log_flags.o 00:03:49.293 CC lib/ut/ut.o 00:03:49.293 LIB libspdk_ut_mock.a 00:03:49.293 LIB libspdk_log.a 00:03:49.293 LIB libspdk_ut.a 00:03:49.293 SO libspdk_ut_mock.so.6.0 00:03:49.293 SO libspdk_ut.so.2.0 00:03:49.293 SO libspdk_log.so.7.0 00:03:49.293 SYMLINK libspdk_ut_mock.so 00:03:49.293 SYMLINK libspdk_ut.so 00:03:49.293 SYMLINK libspdk_log.so 00:03:49.293 CC lib/ioat/ioat.o 00:03:49.293 CC lib/util/base64.o 00:03:49.293 CC lib/util/bit_array.o 00:03:49.293 CC lib/util/cpuset.o 00:03:49.293 CC lib/util/crc16.o 00:03:49.293 CC lib/util/crc32.o 00:03:49.293 CC lib/util/crc32c.o 00:03:49.293 CXX lib/trace_parser/trace.o 00:03:49.293 CC lib/dma/dma.o 00:03:49.293 CC lib/util/crc32_ieee.o 00:03:49.293 CC lib/util/crc64.o 00:03:49.293 CC lib/util/dif.o 00:03:49.293 CC lib/vfio_user/host/vfio_user_pci.o 00:03:49.293 CC lib/util/fd.o 00:03:49.293 LIB libspdk_dma.a 00:03:49.293 CC lib/util/fd_group.o 00:03:49.293 CC lib/util/file.o 00:03:49.293 SO libspdk_dma.so.4.0 00:03:49.293 CC lib/util/hexlify.o 00:03:49.293 CC lib/util/iov.o 00:03:49.293 SYMLINK libspdk_dma.so 00:03:49.293 CC lib/util/math.o 00:03:49.293 LIB libspdk_ioat.a 00:03:49.293 CC lib/vfio_user/host/vfio_user.o 00:03:49.293 SO libspdk_ioat.so.7.0 00:03:49.293 CC lib/util/net.o 00:03:49.293 SYMLINK libspdk_ioat.so 00:03:49.293 CC lib/util/pipe.o 00:03:49.293 CC lib/util/strerror_tls.o 00:03:49.293 CC lib/util/string.o 00:03:49.293 CC lib/util/uuid.o 00:03:49.293 CC lib/util/xor.o 00:03:49.294 CC lib/util/zipf.o 00:03:49.294 LIB libspdk_vfio_user.a 00:03:49.294 SO libspdk_vfio_user.so.5.0 00:03:49.294 SYMLINK libspdk_vfio_user.so 00:03:49.294 LIB libspdk_util.a 00:03:49.294 SO libspdk_util.so.10.0 00:03:49.294 SYMLINK libspdk_util.so 00:03:49.294 LIB libspdk_trace_parser.a 00:03:49.294 SO libspdk_trace_parser.so.5.0 00:03:49.294 SYMLINK libspdk_trace_parser.so 00:03:49.294 CC lib/json/json_parse.o 00:03:49.294 CC lib/json/json_util.o 00:03:49.294 CC 
lib/json/json_write.o 00:03:49.294 CC lib/vmd/vmd.o 00:03:49.294 CC lib/env_dpdk/env.o 00:03:49.294 CC lib/env_dpdk/memory.o 00:03:49.294 CC lib/rdma_provider/common.o 00:03:49.294 CC lib/conf/conf.o 00:03:49.294 CC lib/idxd/idxd.o 00:03:49.294 CC lib/rdma_utils/rdma_utils.o 00:03:49.294 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:49.294 LIB libspdk_conf.a 00:03:49.294 CC lib/idxd/idxd_user.o 00:03:49.294 CC lib/idxd/idxd_kernel.o 00:03:49.294 SO libspdk_conf.so.6.0 00:03:49.294 LIB libspdk_json.a 00:03:49.294 LIB libspdk_rdma_utils.a 00:03:49.294 SYMLINK libspdk_conf.so 00:03:49.294 CC lib/vmd/led.o 00:03:49.294 SO libspdk_json.so.6.0 00:03:49.294 SO libspdk_rdma_utils.so.1.0 00:03:49.294 CC lib/env_dpdk/pci.o 00:03:49.294 LIB libspdk_rdma_provider.a 00:03:49.294 SYMLINK libspdk_json.so 00:03:49.294 SYMLINK libspdk_rdma_utils.so 00:03:49.294 CC lib/env_dpdk/init.o 00:03:49.294 SO libspdk_rdma_provider.so.6.0 00:03:49.294 CC lib/env_dpdk/threads.o 00:03:49.554 SYMLINK libspdk_rdma_provider.so 00:03:49.554 CC lib/env_dpdk/pci_ioat.o 00:03:49.554 CC lib/env_dpdk/pci_virtio.o 00:03:49.554 CC lib/env_dpdk/pci_vmd.o 00:03:49.554 CC lib/jsonrpc/jsonrpc_server.o 00:03:49.554 CC lib/env_dpdk/pci_idxd.o 00:03:49.554 CC lib/env_dpdk/pci_event.o 00:03:49.554 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:49.554 CC lib/env_dpdk/sigbus_handler.o 00:03:49.814 CC lib/env_dpdk/pci_dpdk.o 00:03:49.814 LIB libspdk_idxd.a 00:03:49.814 SO libspdk_idxd.so.12.0 00:03:49.814 LIB libspdk_vmd.a 00:03:49.814 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:49.814 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:49.814 CC lib/jsonrpc/jsonrpc_client.o 00:03:49.814 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:49.814 SO libspdk_vmd.so.6.0 00:03:49.814 SYMLINK libspdk_idxd.so 00:03:49.814 SYMLINK libspdk_vmd.so 00:03:50.074 LIB libspdk_jsonrpc.a 00:03:50.074 SO libspdk_jsonrpc.so.6.0 00:03:50.074 SYMLINK libspdk_jsonrpc.so 00:03:50.642 CC lib/rpc/rpc.o 00:03:50.642 LIB libspdk_env_dpdk.a 00:03:50.901 SO libspdk_env_dpdk.so.15.0 00:03:50.901 LIB libspdk_rpc.a 00:03:50.901 SO libspdk_rpc.so.6.0 00:03:50.901 SYMLINK libspdk_env_dpdk.so 00:03:50.901 SYMLINK libspdk_rpc.so 00:03:51.471 CC lib/keyring/keyring.o 00:03:51.471 CC lib/keyring/keyring_rpc.o 00:03:51.471 CC lib/notify/notify.o 00:03:51.471 CC lib/notify/notify_rpc.o 00:03:51.471 CC lib/trace/trace.o 00:03:51.471 CC lib/trace/trace_flags.o 00:03:51.471 CC lib/trace/trace_rpc.o 00:03:51.471 LIB libspdk_notify.a 00:03:51.471 SO libspdk_notify.so.6.0 00:03:51.471 LIB libspdk_keyring.a 00:03:51.731 SYMLINK libspdk_notify.so 00:03:51.731 SO libspdk_keyring.so.1.0 00:03:51.731 LIB libspdk_trace.a 00:03:51.731 SYMLINK libspdk_keyring.so 00:03:51.731 SO libspdk_trace.so.10.0 00:03:51.731 SYMLINK libspdk_trace.so 00:03:52.300 CC lib/sock/sock.o 00:03:52.300 CC lib/sock/sock_rpc.o 00:03:52.300 CC lib/thread/iobuf.o 00:03:52.300 CC lib/thread/thread.o 00:03:52.560 LIB libspdk_sock.a 00:03:52.818 SO libspdk_sock.so.10.0 00:03:52.818 SYMLINK libspdk_sock.so 00:03:53.078 CC lib/nvme/nvme_ctrlr.o 00:03:53.078 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:53.078 CC lib/nvme/nvme_ns_cmd.o 00:03:53.078 CC lib/nvme/nvme_fabric.o 00:03:53.078 CC lib/nvme/nvme_ns.o 00:03:53.078 CC lib/nvme/nvme_pcie.o 00:03:53.078 CC lib/nvme/nvme_pcie_common.o 00:03:53.078 CC lib/nvme/nvme_qpair.o 00:03:53.078 CC lib/nvme/nvme.o 00:03:54.017 CC lib/nvme/nvme_quirks.o 00:03:54.017 CC lib/nvme/nvme_transport.o 00:03:54.017 CC lib/nvme/nvme_discovery.o 00:03:54.017 LIB libspdk_thread.a 00:03:54.017 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 
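(An illustrative condensation, not part of the captured output: the objects being compiled in this stretch come from the configure-plus-make sequence recorded a little earlier in the log. Only a few of the flags from that invocation are repeated here; the paths and the -j value simply mirror this particular run.)

cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror \
    --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared
make -j10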
00:03:54.017 SO libspdk_thread.so.10.1 00:03:54.017 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:54.017 CC lib/nvme/nvme_tcp.o 00:03:54.277 SYMLINK libspdk_thread.so 00:03:54.277 CC lib/nvme/nvme_opal.o 00:03:54.536 CC lib/nvme/nvme_io_msg.o 00:03:54.536 CC lib/nvme/nvme_poll_group.o 00:03:54.536 CC lib/nvme/nvme_zns.o 00:03:54.796 CC lib/nvme/nvme_stubs.o 00:03:54.796 CC lib/nvme/nvme_auth.o 00:03:54.796 CC lib/accel/accel.o 00:03:54.796 CC lib/nvme/nvme_cuse.o 00:03:54.796 CC lib/nvme/nvme_rdma.o 00:03:55.365 CC lib/blob/blobstore.o 00:03:55.365 CC lib/accel/accel_rpc.o 00:03:55.365 CC lib/accel/accel_sw.o 00:03:55.365 CC lib/init/json_config.o 00:03:55.365 CC lib/init/subsystem.o 00:03:55.625 CC lib/init/subsystem_rpc.o 00:03:55.625 CC lib/blob/request.o 00:03:55.625 CC lib/init/rpc.o 00:03:55.885 CC lib/blob/zeroes.o 00:03:55.885 CC lib/virtio/virtio.o 00:03:55.885 LIB libspdk_init.a 00:03:55.885 CC lib/blob/blob_bs_dev.o 00:03:55.885 SO libspdk_init.so.5.0 00:03:55.885 CC lib/virtio/virtio_vhost_user.o 00:03:56.144 CC lib/fsdev/fsdev.o 00:03:56.144 SYMLINK libspdk_init.so 00:03:56.144 CC lib/fsdev/fsdev_io.o 00:03:56.144 LIB libspdk_accel.a 00:03:56.144 CC lib/fsdev/fsdev_rpc.o 00:03:56.144 SO libspdk_accel.so.16.0 00:03:56.144 CC lib/virtio/virtio_vfio_user.o 00:03:56.144 CC lib/virtio/virtio_pci.o 00:03:56.144 SYMLINK libspdk_accel.so 00:03:56.404 CC lib/event/reactor.o 00:03:56.404 CC lib/event/app.o 00:03:56.404 CC lib/bdev/bdev.o 00:03:56.404 CC lib/bdev/bdev_rpc.o 00:03:56.404 CC lib/bdev/bdev_zone.o 00:03:56.663 CC lib/bdev/part.o 00:03:56.663 LIB libspdk_virtio.a 00:03:56.663 SO libspdk_virtio.so.7.0 00:03:56.663 LIB libspdk_nvme.a 00:03:56.663 CC lib/event/log_rpc.o 00:03:56.663 SYMLINK libspdk_virtio.so 00:03:56.663 CC lib/event/app_rpc.o 00:03:56.922 CC lib/event/scheduler_static.o 00:03:56.922 LIB libspdk_fsdev.a 00:03:56.922 SO libspdk_nvme.so.13.1 00:03:56.922 SO libspdk_fsdev.so.1.0 00:03:56.922 CC lib/bdev/scsi_nvme.o 00:03:56.922 SYMLINK libspdk_fsdev.so 00:03:57.182 LIB libspdk_event.a 00:03:57.182 SO libspdk_event.so.14.0 00:03:57.182 SYMLINK libspdk_nvme.so 00:03:57.182 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:57.182 SYMLINK libspdk_event.so 00:03:58.121 LIB libspdk_fuse_dispatcher.a 00:03:58.121 SO libspdk_fuse_dispatcher.so.1.0 00:03:58.121 SYMLINK libspdk_fuse_dispatcher.so 00:03:59.504 LIB libspdk_blob.a 00:03:59.504 SO libspdk_blob.so.11.0 00:03:59.504 SYMLINK libspdk_blob.so 00:03:59.764 LIB libspdk_bdev.a 00:03:59.764 SO libspdk_bdev.so.16.0 00:03:59.764 CC lib/lvol/lvol.o 00:04:00.024 CC lib/blobfs/blobfs.o 00:04:00.024 CC lib/blobfs/tree.o 00:04:00.024 SYMLINK libspdk_bdev.so 00:04:00.024 CC lib/nvmf/ctrlr_discovery.o 00:04:00.024 CC lib/nvmf/ctrlr.o 00:04:00.024 CC lib/nvmf/ctrlr_bdev.o 00:04:00.285 CC lib/nvmf/subsystem.o 00:04:00.285 CC lib/ftl/ftl_core.o 00:04:00.285 CC lib/scsi/dev.o 00:04:00.285 CC lib/nbd/nbd.o 00:04:00.285 CC lib/ublk/ublk.o 00:04:00.544 CC lib/scsi/lun.o 00:04:00.544 CC lib/ftl/ftl_init.o 00:04:00.544 CC lib/nbd/nbd_rpc.o 00:04:00.804 CC lib/ftl/ftl_layout.o 00:04:00.804 CC lib/scsi/port.o 00:04:00.804 LIB libspdk_nbd.a 00:04:00.804 SO libspdk_nbd.so.7.0 00:04:00.804 CC lib/ublk/ublk_rpc.o 00:04:01.064 LIB libspdk_blobfs.a 00:04:01.064 SYMLINK libspdk_nbd.so 00:04:01.064 CC lib/scsi/scsi.o 00:04:01.064 SO libspdk_blobfs.so.10.0 00:04:01.064 CC lib/scsi/scsi_bdev.o 00:04:01.064 CC lib/scsi/scsi_pr.o 00:04:01.064 CC lib/scsi/scsi_rpc.o 00:04:01.064 SYMLINK libspdk_blobfs.so 00:04:01.064 LIB libspdk_lvol.a 00:04:01.064 CC 
lib/ftl/ftl_debug.o 00:04:01.064 SO libspdk_lvol.so.10.0 00:04:01.064 LIB libspdk_ublk.a 00:04:01.064 CC lib/nvmf/nvmf.o 00:04:01.064 CC lib/scsi/task.o 00:04:01.064 SO libspdk_ublk.so.3.0 00:04:01.064 SYMLINK libspdk_lvol.so 00:04:01.064 CC lib/ftl/ftl_io.o 00:04:01.064 CC lib/ftl/ftl_sb.o 00:04:01.324 SYMLINK libspdk_ublk.so 00:04:01.324 CC lib/ftl/ftl_l2p.o 00:04:01.324 CC lib/ftl/ftl_l2p_flat.o 00:04:01.324 CC lib/ftl/ftl_nv_cache.o 00:04:01.324 CC lib/ftl/ftl_band.o 00:04:01.324 CC lib/ftl/ftl_band_ops.o 00:04:01.324 CC lib/ftl/ftl_writer.o 00:04:01.324 CC lib/ftl/ftl_rq.o 00:04:01.584 CC lib/ftl/ftl_reloc.o 00:04:01.584 LIB libspdk_scsi.a 00:04:01.584 CC lib/ftl/ftl_l2p_cache.o 00:04:01.584 SO libspdk_scsi.so.9.0 00:04:01.584 CC lib/nvmf/nvmf_rpc.o 00:04:01.844 CC lib/nvmf/transport.o 00:04:01.844 SYMLINK libspdk_scsi.so 00:04:01.844 CC lib/nvmf/tcp.o 00:04:01.844 CC lib/ftl/ftl_p2l.o 00:04:01.844 CC lib/ftl/mngt/ftl_mngt.o 00:04:01.844 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:02.104 CC lib/nvmf/stubs.o 00:04:02.104 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:02.104 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:02.396 CC lib/iscsi/conn.o 00:04:02.396 CC lib/nvmf/mdns_server.o 00:04:02.396 CC lib/nvmf/rdma.o 00:04:02.396 CC lib/vhost/vhost.o 00:04:02.682 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:02.682 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:02.682 CC lib/iscsi/init_grp.o 00:04:02.682 CC lib/iscsi/iscsi.o 00:04:02.968 CC lib/iscsi/md5.o 00:04:02.968 CC lib/iscsi/param.o 00:04:02.968 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:02.968 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:03.242 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:03.242 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:03.242 CC lib/nvmf/auth.o 00:04:03.242 CC lib/vhost/vhost_rpc.o 00:04:03.242 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:03.242 CC lib/iscsi/portal_grp.o 00:04:03.501 CC lib/vhost/vhost_scsi.o 00:04:03.501 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:03.501 CC lib/iscsi/tgt_node.o 00:04:03.501 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:03.760 CC lib/iscsi/iscsi_subsystem.o 00:04:03.760 CC lib/iscsi/iscsi_rpc.o 00:04:03.760 CC lib/iscsi/task.o 00:04:03.760 CC lib/vhost/vhost_blk.o 00:04:04.019 CC lib/ftl/utils/ftl_conf.o 00:04:04.019 CC lib/ftl/utils/ftl_md.o 00:04:04.019 CC lib/ftl/utils/ftl_mempool.o 00:04:04.019 CC lib/vhost/rte_vhost_user.o 00:04:04.277 CC lib/ftl/utils/ftl_bitmap.o 00:04:04.277 CC lib/ftl/utils/ftl_property.o 00:04:04.277 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:04.277 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:04.277 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:04.277 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:04.537 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:04.537 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:04.537 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:04.537 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:04.537 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:04.537 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:04.537 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:04.796 CC lib/ftl/base/ftl_base_dev.o 00:04:04.796 CC lib/ftl/base/ftl_base_bdev.o 00:04:04.796 CC lib/ftl/ftl_trace.o 00:04:05.054 LIB libspdk_ftl.a 00:04:05.314 LIB libspdk_nvmf.a 00:04:05.314 LIB libspdk_vhost.a 00:04:05.314 SO libspdk_ftl.so.9.0 00:04:05.314 SO libspdk_nvmf.so.19.0 00:04:05.314 SO libspdk_vhost.so.8.0 00:04:05.574 LIB libspdk_iscsi.a 00:04:05.574 SYMLINK libspdk_vhost.so 00:04:05.574 SO libspdk_iscsi.so.8.0 00:04:05.574 SYMLINK libspdk_ftl.so 00:04:05.574 SYMLINK libspdk_nvmf.so 00:04:05.834 SYMLINK libspdk_iscsi.so 00:04:06.093 CC module/env_dpdk/env_dpdk_rpc.o 00:04:06.093 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:04:06.093 CC module/blob/bdev/blob_bdev.o 00:04:06.093 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:06.093 CC module/keyring/file/keyring.o 00:04:06.093 CC module/scheduler/gscheduler/gscheduler.o 00:04:06.093 CC module/keyring/linux/keyring.o 00:04:06.093 CC module/sock/posix/posix.o 00:04:06.093 CC module/fsdev/aio/fsdev_aio.o 00:04:06.093 CC module/accel/error/accel_error.o 00:04:06.352 LIB libspdk_env_dpdk_rpc.a 00:04:06.352 SO libspdk_env_dpdk_rpc.so.6.0 00:04:06.352 CC module/keyring/file/keyring_rpc.o 00:04:06.352 LIB libspdk_scheduler_dpdk_governor.a 00:04:06.352 SYMLINK libspdk_env_dpdk_rpc.so 00:04:06.352 CC module/keyring/linux/keyring_rpc.o 00:04:06.352 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:06.352 LIB libspdk_scheduler_gscheduler.a 00:04:06.352 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:06.352 SO libspdk_scheduler_gscheduler.so.4.0 00:04:06.352 LIB libspdk_scheduler_dynamic.a 00:04:06.352 SO libspdk_scheduler_dynamic.so.4.0 00:04:06.352 CC module/accel/error/accel_error_rpc.o 00:04:06.352 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:06.352 SYMLINK libspdk_scheduler_gscheduler.so 00:04:06.352 SYMLINK libspdk_scheduler_dynamic.so 00:04:06.352 LIB libspdk_keyring_file.a 00:04:06.352 LIB libspdk_blob_bdev.a 00:04:06.352 LIB libspdk_keyring_linux.a 00:04:06.352 CC module/fsdev/aio/linux_aio_mgr.o 00:04:06.612 SO libspdk_blob_bdev.so.11.0 00:04:06.612 SO libspdk_keyring_file.so.1.0 00:04:06.612 SO libspdk_keyring_linux.so.1.0 00:04:06.612 LIB libspdk_accel_error.a 00:04:06.612 SYMLINK libspdk_blob_bdev.so 00:04:06.612 SYMLINK libspdk_keyring_file.so 00:04:06.612 SO libspdk_accel_error.so.2.0 00:04:06.612 SYMLINK libspdk_keyring_linux.so 00:04:06.612 CC module/accel/dsa/accel_dsa.o 00:04:06.612 CC module/accel/dsa/accel_dsa_rpc.o 00:04:06.612 CC module/accel/ioat/accel_ioat.o 00:04:06.612 CC module/accel/iaa/accel_iaa.o 00:04:06.612 SYMLINK libspdk_accel_error.so 00:04:06.612 CC module/accel/iaa/accel_iaa_rpc.o 00:04:06.871 CC module/accel/ioat/accel_ioat_rpc.o 00:04:06.871 CC module/bdev/delay/vbdev_delay.o 00:04:06.871 CC module/blobfs/bdev/blobfs_bdev.o 00:04:06.871 LIB libspdk_accel_iaa.a 00:04:06.871 CC module/bdev/error/vbdev_error.o 00:04:06.871 SO libspdk_accel_iaa.so.3.0 00:04:06.871 LIB libspdk_accel_dsa.a 00:04:06.871 CC module/bdev/gpt/gpt.o 00:04:06.871 SO libspdk_accel_dsa.so.5.0 00:04:06.871 LIB libspdk_accel_ioat.a 00:04:06.871 SYMLINK libspdk_accel_iaa.so 00:04:06.871 CC module/bdev/gpt/vbdev_gpt.o 00:04:06.871 CC module/bdev/lvol/vbdev_lvol.o 00:04:06.871 SO libspdk_accel_ioat.so.6.0 00:04:06.871 LIB libspdk_fsdev_aio.a 00:04:06.871 SYMLINK libspdk_accel_dsa.so 00:04:07.129 SO libspdk_fsdev_aio.so.1.0 00:04:07.129 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:07.129 SYMLINK libspdk_accel_ioat.so 00:04:07.129 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:07.129 LIB libspdk_sock_posix.a 00:04:07.129 SYMLINK libspdk_fsdev_aio.so 00:04:07.129 SO libspdk_sock_posix.so.6.0 00:04:07.129 CC module/bdev/error/vbdev_error_rpc.o 00:04:07.129 CC module/bdev/malloc/bdev_malloc.o 00:04:07.129 SYMLINK libspdk_sock_posix.so 00:04:07.129 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:07.129 LIB libspdk_blobfs_bdev.a 00:04:07.129 CC module/bdev/null/bdev_null.o 00:04:07.129 LIB libspdk_bdev_gpt.a 00:04:07.129 SO libspdk_blobfs_bdev.so.6.0 00:04:07.387 SO libspdk_bdev_gpt.so.6.0 00:04:07.387 CC module/bdev/nvme/bdev_nvme.o 00:04:07.387 SYMLINK libspdk_blobfs_bdev.so 00:04:07.387 LIB libspdk_bdev_error.a 00:04:07.387 CC 
module/bdev/nvme/bdev_nvme_rpc.o 00:04:07.387 SYMLINK libspdk_bdev_gpt.so 00:04:07.387 CC module/bdev/nvme/nvme_rpc.o 00:04:07.387 SO libspdk_bdev_error.so.6.0 00:04:07.387 LIB libspdk_bdev_delay.a 00:04:07.387 CC module/bdev/passthru/vbdev_passthru.o 00:04:07.387 SO libspdk_bdev_delay.so.6.0 00:04:07.387 SYMLINK libspdk_bdev_error.so 00:04:07.387 SYMLINK libspdk_bdev_delay.so 00:04:07.647 LIB libspdk_bdev_lvol.a 00:04:07.647 CC module/bdev/null/bdev_null_rpc.o 00:04:07.647 SO libspdk_bdev_lvol.so.6.0 00:04:07.647 CC module/bdev/raid/bdev_raid.o 00:04:07.647 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:07.647 CC module/bdev/split/vbdev_split.o 00:04:07.647 CC module/bdev/nvme/bdev_mdns_client.o 00:04:07.647 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:07.647 SYMLINK libspdk_bdev_lvol.so 00:04:07.647 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:07.647 LIB libspdk_bdev_null.a 00:04:07.647 SO libspdk_bdev_null.so.6.0 00:04:07.647 LIB libspdk_bdev_malloc.a 00:04:07.647 CC module/bdev/raid/bdev_raid_rpc.o 00:04:07.906 SO libspdk_bdev_malloc.so.6.0 00:04:07.906 SYMLINK libspdk_bdev_null.so 00:04:07.906 CC module/bdev/aio/bdev_aio.o 00:04:07.906 CC module/bdev/split/vbdev_split_rpc.o 00:04:07.906 SYMLINK libspdk_bdev_malloc.so 00:04:07.906 CC module/bdev/raid/bdev_raid_sb.o 00:04:07.906 LIB libspdk_bdev_passthru.a 00:04:07.906 SO libspdk_bdev_passthru.so.6.0 00:04:07.906 CC module/bdev/ftl/bdev_ftl.o 00:04:07.906 SYMLINK libspdk_bdev_passthru.so 00:04:07.906 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:07.906 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:07.906 LIB libspdk_bdev_split.a 00:04:07.906 CC module/bdev/nvme/vbdev_opal.o 00:04:07.906 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:07.906 SO libspdk_bdev_split.so.6.0 00:04:08.165 SYMLINK libspdk_bdev_split.so 00:04:08.166 CC module/bdev/raid/raid0.o 00:04:08.166 LIB libspdk_bdev_zone_block.a 00:04:08.166 CC module/bdev/aio/bdev_aio_rpc.o 00:04:08.166 CC module/bdev/raid/raid1.o 00:04:08.166 SO libspdk_bdev_zone_block.so.6.0 00:04:08.166 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:08.166 LIB libspdk_bdev_ftl.a 00:04:08.166 CC module/bdev/iscsi/bdev_iscsi.o 00:04:08.166 SYMLINK libspdk_bdev_zone_block.so 00:04:08.166 CC module/bdev/raid/concat.o 00:04:08.166 SO libspdk_bdev_ftl.so.6.0 00:04:08.426 LIB libspdk_bdev_aio.a 00:04:08.426 SYMLINK libspdk_bdev_ftl.so 00:04:08.426 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:08.426 SO libspdk_bdev_aio.so.6.0 00:04:08.426 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:08.426 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:08.426 CC module/bdev/raid/raid5f.o 00:04:08.426 SYMLINK libspdk_bdev_aio.so 00:04:08.426 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:08.684 LIB libspdk_bdev_iscsi.a 00:04:08.684 SO libspdk_bdev_iscsi.so.6.0 00:04:08.684 SYMLINK libspdk_bdev_iscsi.so 00:04:08.966 LIB libspdk_bdev_raid.a 00:04:08.966 LIB libspdk_bdev_virtio.a 00:04:08.966 SO libspdk_bdev_raid.so.6.0 00:04:08.966 SO libspdk_bdev_virtio.so.6.0 00:04:09.225 SYMLINK libspdk_bdev_virtio.so 00:04:09.225 SYMLINK libspdk_bdev_raid.so 00:04:09.790 LIB libspdk_bdev_nvme.a 00:04:09.790 SO libspdk_bdev_nvme.so.7.0 00:04:10.049 SYMLINK libspdk_bdev_nvme.so 00:04:10.615 CC module/event/subsystems/vmd/vmd.o 00:04:10.615 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:10.615 CC module/event/subsystems/fsdev/fsdev.o 00:04:10.615 CC module/event/subsystems/iobuf/iobuf.o 00:04:10.615 CC module/event/subsystems/keyring/keyring.o 00:04:10.615 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:10.615 CC 
module/event/subsystems/scheduler/scheduler.o 00:04:10.615 CC module/event/subsystems/sock/sock.o 00:04:10.615 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:10.615 LIB libspdk_event_keyring.a 00:04:10.615 LIB libspdk_event_vmd.a 00:04:10.615 LIB libspdk_event_fsdev.a 00:04:10.615 LIB libspdk_event_scheduler.a 00:04:10.615 LIB libspdk_event_iobuf.a 00:04:10.615 SO libspdk_event_keyring.so.1.0 00:04:10.615 SO libspdk_event_fsdev.so.1.0 00:04:10.615 SO libspdk_event_vmd.so.6.0 00:04:10.615 SO libspdk_event_scheduler.so.4.0 00:04:10.615 LIB libspdk_event_sock.a 00:04:10.615 LIB libspdk_event_vhost_blk.a 00:04:10.874 SO libspdk_event_vhost_blk.so.3.0 00:04:10.874 SO libspdk_event_sock.so.5.0 00:04:10.874 SO libspdk_event_iobuf.so.3.0 00:04:10.874 SYMLINK libspdk_event_keyring.so 00:04:10.874 SYMLINK libspdk_event_fsdev.so 00:04:10.874 SYMLINK libspdk_event_vmd.so 00:04:10.874 SYMLINK libspdk_event_scheduler.so 00:04:10.874 SYMLINK libspdk_event_sock.so 00:04:10.874 SYMLINK libspdk_event_vhost_blk.so 00:04:10.874 SYMLINK libspdk_event_iobuf.so 00:04:11.132 CC module/event/subsystems/accel/accel.o 00:04:11.391 LIB libspdk_event_accel.a 00:04:11.391 SO libspdk_event_accel.so.6.0 00:04:11.391 SYMLINK libspdk_event_accel.so 00:04:11.958 CC module/event/subsystems/bdev/bdev.o 00:04:11.958 LIB libspdk_event_bdev.a 00:04:11.958 SO libspdk_event_bdev.so.6.0 00:04:12.216 SYMLINK libspdk_event_bdev.so 00:04:12.475 CC module/event/subsystems/nbd/nbd.o 00:04:12.475 CC module/event/subsystems/ublk/ublk.o 00:04:12.475 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:12.475 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:12.475 CC module/event/subsystems/scsi/scsi.o 00:04:12.733 LIB libspdk_event_nbd.a 00:04:12.733 LIB libspdk_event_ublk.a 00:04:12.733 LIB libspdk_event_scsi.a 00:04:12.733 SO libspdk_event_nbd.so.6.0 00:04:12.733 SO libspdk_event_ublk.so.3.0 00:04:12.733 SO libspdk_event_scsi.so.6.0 00:04:12.733 SYMLINK libspdk_event_ublk.so 00:04:12.733 LIB libspdk_event_nvmf.a 00:04:12.733 SYMLINK libspdk_event_scsi.so 00:04:12.734 SYMLINK libspdk_event_nbd.so 00:04:12.734 SO libspdk_event_nvmf.so.6.0 00:04:12.734 SYMLINK libspdk_event_nvmf.so 00:04:13.062 CC module/event/subsystems/iscsi/iscsi.o 00:04:13.062 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:13.322 LIB libspdk_event_vhost_scsi.a 00:04:13.322 LIB libspdk_event_iscsi.a 00:04:13.322 SO libspdk_event_vhost_scsi.so.3.0 00:04:13.322 SO libspdk_event_iscsi.so.6.0 00:04:13.322 SYMLINK libspdk_event_vhost_scsi.so 00:04:13.322 SYMLINK libspdk_event_iscsi.so 00:04:13.581 SO libspdk.so.6.0 00:04:13.581 SYMLINK libspdk.so 00:04:13.840 CC app/trace_record/trace_record.o 00:04:13.840 CC app/spdk_nvme_identify/identify.o 00:04:13.840 CXX app/trace/trace.o 00:04:13.840 CC app/spdk_lspci/spdk_lspci.o 00:04:13.840 CC app/spdk_nvme_perf/perf.o 00:04:13.840 CC app/iscsi_tgt/iscsi_tgt.o 00:04:13.840 CC app/nvmf_tgt/nvmf_main.o 00:04:13.840 CC app/spdk_tgt/spdk_tgt.o 00:04:14.099 CC test/thread/poller_perf/poller_perf.o 00:04:14.099 CC examples/util/zipf/zipf.o 00:04:14.099 LINK spdk_lspci 00:04:14.099 LINK nvmf_tgt 00:04:14.099 LINK poller_perf 00:04:14.099 LINK iscsi_tgt 00:04:14.099 LINK spdk_trace_record 00:04:14.099 LINK zipf 00:04:14.358 LINK spdk_tgt 00:04:14.358 LINK spdk_trace 00:04:14.358 CC examples/ioat/perf/perf.o 00:04:14.358 CC examples/ioat/verify/verify.o 00:04:14.617 CC app/spdk_nvme_discover/discovery_aer.o 00:04:14.617 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:14.617 CC test/dma/test_dma/test_dma.o 00:04:14.617 CC 
test/app/bdev_svc/bdev_svc.o 00:04:14.617 CC app/spdk_top/spdk_top.o 00:04:14.617 LINK ioat_perf 00:04:14.617 CC app/spdk_dd/spdk_dd.o 00:04:14.617 LINK spdk_nvme_discover 00:04:14.617 LINK verify 00:04:14.876 LINK bdev_svc 00:04:14.876 LINK interrupt_tgt 00:04:14.876 LINK spdk_nvme_identify 00:04:14.876 TEST_HEADER include/spdk/accel.h 00:04:14.876 TEST_HEADER include/spdk/accel_module.h 00:04:15.135 TEST_HEADER include/spdk/assert.h 00:04:15.135 TEST_HEADER include/spdk/barrier.h 00:04:15.135 TEST_HEADER include/spdk/base64.h 00:04:15.135 TEST_HEADER include/spdk/bdev.h 00:04:15.135 TEST_HEADER include/spdk/bdev_module.h 00:04:15.135 TEST_HEADER include/spdk/bdev_zone.h 00:04:15.135 TEST_HEADER include/spdk/bit_array.h 00:04:15.135 LINK test_dma 00:04:15.135 TEST_HEADER include/spdk/bit_pool.h 00:04:15.135 TEST_HEADER include/spdk/blob_bdev.h 00:04:15.135 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:15.135 TEST_HEADER include/spdk/blobfs.h 00:04:15.135 TEST_HEADER include/spdk/blob.h 00:04:15.135 TEST_HEADER include/spdk/conf.h 00:04:15.135 CC app/vhost/vhost.o 00:04:15.135 TEST_HEADER include/spdk/config.h 00:04:15.135 TEST_HEADER include/spdk/cpuset.h 00:04:15.135 TEST_HEADER include/spdk/crc16.h 00:04:15.135 TEST_HEADER include/spdk/crc32.h 00:04:15.135 TEST_HEADER include/spdk/crc64.h 00:04:15.135 TEST_HEADER include/spdk/dif.h 00:04:15.135 TEST_HEADER include/spdk/dma.h 00:04:15.135 TEST_HEADER include/spdk/endian.h 00:04:15.136 TEST_HEADER include/spdk/env_dpdk.h 00:04:15.136 TEST_HEADER include/spdk/env.h 00:04:15.136 LINK spdk_nvme_perf 00:04:15.136 TEST_HEADER include/spdk/event.h 00:04:15.136 TEST_HEADER include/spdk/fd_group.h 00:04:15.136 TEST_HEADER include/spdk/fd.h 00:04:15.136 TEST_HEADER include/spdk/file.h 00:04:15.136 TEST_HEADER include/spdk/fsdev.h 00:04:15.136 TEST_HEADER include/spdk/fsdev_module.h 00:04:15.136 TEST_HEADER include/spdk/ftl.h 00:04:15.136 CC app/fio/nvme/fio_plugin.o 00:04:15.136 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:15.136 TEST_HEADER include/spdk/gpt_spec.h 00:04:15.136 TEST_HEADER include/spdk/hexlify.h 00:04:15.136 TEST_HEADER include/spdk/histogram_data.h 00:04:15.136 TEST_HEADER include/spdk/idxd.h 00:04:15.136 TEST_HEADER include/spdk/idxd_spec.h 00:04:15.136 LINK spdk_dd 00:04:15.136 TEST_HEADER include/spdk/init.h 00:04:15.136 TEST_HEADER include/spdk/ioat.h 00:04:15.136 TEST_HEADER include/spdk/ioat_spec.h 00:04:15.136 TEST_HEADER include/spdk/iscsi_spec.h 00:04:15.136 TEST_HEADER include/spdk/json.h 00:04:15.136 TEST_HEADER include/spdk/jsonrpc.h 00:04:15.136 TEST_HEADER include/spdk/keyring.h 00:04:15.136 TEST_HEADER include/spdk/keyring_module.h 00:04:15.136 TEST_HEADER include/spdk/likely.h 00:04:15.136 TEST_HEADER include/spdk/log.h 00:04:15.136 TEST_HEADER include/spdk/lvol.h 00:04:15.136 TEST_HEADER include/spdk/memory.h 00:04:15.136 TEST_HEADER include/spdk/mmio.h 00:04:15.136 TEST_HEADER include/spdk/nbd.h 00:04:15.136 TEST_HEADER include/spdk/net.h 00:04:15.136 TEST_HEADER include/spdk/notify.h 00:04:15.136 TEST_HEADER include/spdk/nvme.h 00:04:15.136 TEST_HEADER include/spdk/nvme_intel.h 00:04:15.136 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:15.136 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:15.136 TEST_HEADER include/spdk/nvme_spec.h 00:04:15.136 TEST_HEADER include/spdk/nvme_zns.h 00:04:15.136 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:15.136 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:15.136 TEST_HEADER include/spdk/nvmf.h 00:04:15.136 TEST_HEADER include/spdk/nvmf_spec.h 00:04:15.136 TEST_HEADER 
include/spdk/nvmf_transport.h 00:04:15.136 TEST_HEADER include/spdk/opal.h 00:04:15.136 TEST_HEADER include/spdk/opal_spec.h 00:04:15.136 TEST_HEADER include/spdk/pci_ids.h 00:04:15.136 TEST_HEADER include/spdk/pipe.h 00:04:15.136 TEST_HEADER include/spdk/queue.h 00:04:15.136 TEST_HEADER include/spdk/reduce.h 00:04:15.136 TEST_HEADER include/spdk/rpc.h 00:04:15.136 TEST_HEADER include/spdk/scheduler.h 00:04:15.136 TEST_HEADER include/spdk/scsi.h 00:04:15.136 TEST_HEADER include/spdk/scsi_spec.h 00:04:15.136 TEST_HEADER include/spdk/sock.h 00:04:15.136 TEST_HEADER include/spdk/stdinc.h 00:04:15.136 TEST_HEADER include/spdk/string.h 00:04:15.136 TEST_HEADER include/spdk/thread.h 00:04:15.136 TEST_HEADER include/spdk/trace.h 00:04:15.136 TEST_HEADER include/spdk/trace_parser.h 00:04:15.136 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:15.136 TEST_HEADER include/spdk/tree.h 00:04:15.136 TEST_HEADER include/spdk/ublk.h 00:04:15.136 TEST_HEADER include/spdk/util.h 00:04:15.136 TEST_HEADER include/spdk/uuid.h 00:04:15.136 TEST_HEADER include/spdk/version.h 00:04:15.394 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:15.394 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:15.394 TEST_HEADER include/spdk/vhost.h 00:04:15.394 TEST_HEADER include/spdk/vmd.h 00:04:15.394 TEST_HEADER include/spdk/xor.h 00:04:15.394 LINK vhost 00:04:15.394 TEST_HEADER include/spdk/zipf.h 00:04:15.394 CXX test/cpp_headers/accel.o 00:04:15.394 CC test/env/vtophys/vtophys.o 00:04:15.394 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:15.394 CC examples/thread/thread/thread_ex.o 00:04:15.394 CC test/env/mem_callbacks/mem_callbacks.o 00:04:15.394 CXX test/cpp_headers/accel_module.o 00:04:15.394 CC test/event/event_perf/event_perf.o 00:04:15.394 CXX test/cpp_headers/assert.o 00:04:15.652 LINK vtophys 00:04:15.652 LINK env_dpdk_post_init 00:04:15.652 LINK mem_callbacks 00:04:15.652 LINK event_perf 00:04:15.652 LINK thread 00:04:15.652 CXX test/cpp_headers/barrier.o 00:04:15.652 CXX test/cpp_headers/base64.o 00:04:15.652 LINK spdk_top 00:04:15.652 LINK nvme_fuzz 00:04:15.911 LINK spdk_nvme 00:04:15.911 CC test/nvme/aer/aer.o 00:04:15.911 CC test/rpc_client/rpc_client_test.o 00:04:15.911 CC test/env/memory/memory_ut.o 00:04:15.911 CXX test/cpp_headers/bdev.o 00:04:15.911 CXX test/cpp_headers/bdev_module.o 00:04:15.911 CC test/event/reactor/reactor.o 00:04:15.911 CC test/event/reactor_perf/reactor_perf.o 00:04:15.911 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:16.170 LINK rpc_client_test 00:04:16.170 CC app/fio/bdev/fio_plugin.o 00:04:16.170 CC examples/sock/hello_world/hello_sock.o 00:04:16.170 LINK reactor_perf 00:04:16.170 CXX test/cpp_headers/bdev_zone.o 00:04:16.170 LINK aer 00:04:16.170 LINK reactor 00:04:16.170 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:16.429 CXX test/cpp_headers/bit_array.o 00:04:16.429 LINK hello_sock 00:04:16.429 CXX test/cpp_headers/bit_pool.o 00:04:16.429 CC test/event/app_repeat/app_repeat.o 00:04:16.429 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:16.429 CC test/accel/dif/dif.o 00:04:16.429 CC test/nvme/reset/reset.o 00:04:16.687 CXX test/cpp_headers/blob_bdev.o 00:04:16.687 CC test/nvme/sgl/sgl.o 00:04:16.687 LINK spdk_bdev 00:04:16.687 LINK app_repeat 00:04:16.687 LINK memory_ut 00:04:16.944 LINK reset 00:04:16.944 CC examples/vmd/lsvmd/lsvmd.o 00:04:16.944 CXX test/cpp_headers/blobfs_bdev.o 00:04:16.944 CC examples/vmd/led/led.o 00:04:16.944 LINK sgl 00:04:16.944 CC test/event/scheduler/scheduler.o 00:04:16.944 LINK lsvmd 00:04:16.944 CC test/env/pci/pci_ut.o 00:04:16.944 
LINK vhost_fuzz 00:04:16.944 CXX test/cpp_headers/blobfs.o 00:04:17.202 LINK led 00:04:17.202 LINK dif 00:04:17.202 CC test/nvme/e2edp/nvme_dp.o 00:04:17.202 CXX test/cpp_headers/blob.o 00:04:17.202 CC test/nvme/overhead/overhead.o 00:04:17.202 LINK scheduler 00:04:17.202 CC test/nvme/startup/startup.o 00:04:17.202 CC test/nvme/err_injection/err_injection.o 00:04:17.460 CXX test/cpp_headers/conf.o 00:04:17.460 CC examples/idxd/perf/perf.o 00:04:17.460 LINK startup 00:04:17.460 LINK pci_ut 00:04:17.460 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:17.460 LINK err_injection 00:04:17.460 LINK nvme_dp 00:04:17.460 CXX test/cpp_headers/config.o 00:04:17.460 LINK overhead 00:04:17.718 CXX test/cpp_headers/cpuset.o 00:04:17.718 CC examples/accel/perf/accel_perf.o 00:04:17.718 CC test/nvme/reserve/reserve.o 00:04:17.718 CXX test/cpp_headers/crc16.o 00:04:17.718 CXX test/cpp_headers/crc32.o 00:04:17.718 CC test/nvme/simple_copy/simple_copy.o 00:04:17.718 LINK hello_fsdev 00:04:17.718 LINK idxd_perf 00:04:17.976 CC examples/blob/hello_world/hello_blob.o 00:04:17.976 CXX test/cpp_headers/crc64.o 00:04:17.976 LINK reserve 00:04:17.976 CC test/blobfs/mkfs/mkfs.o 00:04:17.976 CC test/nvme/connect_stress/connect_stress.o 00:04:17.976 LINK simple_copy 00:04:17.976 CXX test/cpp_headers/dif.o 00:04:17.976 LINK iscsi_fuzz 00:04:18.234 CC test/nvme/boot_partition/boot_partition.o 00:04:18.234 LINK hello_blob 00:04:18.234 LINK mkfs 00:04:18.234 CC test/lvol/esnap/esnap.o 00:04:18.234 LINK connect_stress 00:04:18.234 LINK accel_perf 00:04:18.234 CXX test/cpp_headers/dma.o 00:04:18.234 LINK boot_partition 00:04:18.234 CC test/bdev/bdevio/bdevio.o 00:04:18.492 CC examples/nvme/hello_world/hello_world.o 00:04:18.492 CXX test/cpp_headers/endian.o 00:04:18.492 CC test/nvme/compliance/nvme_compliance.o 00:04:18.492 CC test/app/histogram_perf/histogram_perf.o 00:04:18.492 CC examples/blob/cli/blobcli.o 00:04:18.492 CC test/nvme/fused_ordering/fused_ordering.o 00:04:18.492 CC test/app/jsoncat/jsoncat.o 00:04:18.492 CC test/app/stub/stub.o 00:04:18.492 CXX test/cpp_headers/env_dpdk.o 00:04:18.492 LINK hello_world 00:04:18.750 LINK histogram_perf 00:04:18.750 LINK jsoncat 00:04:18.750 LINK fused_ordering 00:04:18.750 LINK stub 00:04:18.750 CXX test/cpp_headers/env.o 00:04:18.750 LINK bdevio 00:04:18.750 CXX test/cpp_headers/event.o 00:04:18.750 LINK nvme_compliance 00:04:18.750 CXX test/cpp_headers/fd_group.o 00:04:19.009 CC examples/nvme/reconnect/reconnect.o 00:04:19.009 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:19.009 CXX test/cpp_headers/fd.o 00:04:19.009 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:19.009 CC test/nvme/fdp/fdp.o 00:04:19.009 CC examples/nvme/arbitration/arbitration.o 00:04:19.009 LINK blobcli 00:04:19.009 CXX test/cpp_headers/file.o 00:04:19.009 CC examples/nvme/hotplug/hotplug.o 00:04:19.009 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:19.266 LINK doorbell_aers 00:04:19.267 CXX test/cpp_headers/fsdev.o 00:04:19.267 LINK reconnect 00:04:19.267 LINK cmb_copy 00:04:19.524 LINK hotplug 00:04:19.524 LINK arbitration 00:04:19.524 LINK fdp 00:04:19.524 CXX test/cpp_headers/fsdev_module.o 00:04:19.524 CC examples/bdev/hello_world/hello_bdev.o 00:04:19.524 LINK nvme_manage 00:04:19.524 CC test/nvme/cuse/cuse.o 00:04:19.524 CC examples/bdev/bdevperf/bdevperf.o 00:04:19.524 CC examples/nvme/abort/abort.o 00:04:19.524 CXX test/cpp_headers/ftl.o 00:04:19.524 CXX test/cpp_headers/fuse_dispatcher.o 00:04:19.524 CXX test/cpp_headers/gpt_spec.o 00:04:19.783 CXX test/cpp_headers/hexlify.o 00:04:19.783 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:04:19.783 LINK hello_bdev 00:04:19.783 CXX test/cpp_headers/histogram_data.o 00:04:19.783 CXX test/cpp_headers/idxd.o 00:04:19.783 CXX test/cpp_headers/idxd_spec.o 00:04:19.783 CXX test/cpp_headers/init.o 00:04:19.783 LINK pmr_persistence 00:04:19.783 CXX test/cpp_headers/ioat.o 00:04:20.042 CXX test/cpp_headers/ioat_spec.o 00:04:20.042 CXX test/cpp_headers/iscsi_spec.o 00:04:20.042 CXX test/cpp_headers/json.o 00:04:20.042 CXX test/cpp_headers/jsonrpc.o 00:04:20.042 LINK abort 00:04:20.042 CXX test/cpp_headers/keyring.o 00:04:20.042 CXX test/cpp_headers/keyring_module.o 00:04:20.042 CXX test/cpp_headers/likely.o 00:04:20.042 CXX test/cpp_headers/log.o 00:04:20.042 CXX test/cpp_headers/lvol.o 00:04:20.302 CXX test/cpp_headers/memory.o 00:04:20.302 CXX test/cpp_headers/mmio.o 00:04:20.302 CXX test/cpp_headers/nbd.o 00:04:20.302 CXX test/cpp_headers/net.o 00:04:20.302 CXX test/cpp_headers/notify.o 00:04:20.302 CXX test/cpp_headers/nvme.o 00:04:20.302 CXX test/cpp_headers/nvme_intel.o 00:04:20.302 CXX test/cpp_headers/nvme_ocssd.o 00:04:20.302 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:20.302 CXX test/cpp_headers/nvme_spec.o 00:04:20.302 CXX test/cpp_headers/nvme_zns.o 00:04:20.302 CXX test/cpp_headers/nvmf_cmd.o 00:04:20.302 LINK bdevperf 00:04:20.626 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:20.626 CXX test/cpp_headers/nvmf.o 00:04:20.626 CXX test/cpp_headers/nvmf_spec.o 00:04:20.626 CXX test/cpp_headers/nvmf_transport.o 00:04:20.626 CXX test/cpp_headers/opal.o 00:04:20.626 CXX test/cpp_headers/opal_spec.o 00:04:20.626 CXX test/cpp_headers/pci_ids.o 00:04:20.626 CXX test/cpp_headers/pipe.o 00:04:20.626 CXX test/cpp_headers/queue.o 00:04:20.891 CXX test/cpp_headers/reduce.o 00:04:20.891 CXX test/cpp_headers/rpc.o 00:04:20.891 CXX test/cpp_headers/scheduler.o 00:04:20.891 CXX test/cpp_headers/scsi.o 00:04:20.891 CXX test/cpp_headers/scsi_spec.o 00:04:20.891 CXX test/cpp_headers/sock.o 00:04:20.891 CXX test/cpp_headers/stdinc.o 00:04:20.891 CXX test/cpp_headers/string.o 00:04:20.891 CC examples/nvmf/nvmf/nvmf.o 00:04:20.891 CXX test/cpp_headers/thread.o 00:04:20.891 LINK cuse 00:04:20.891 CXX test/cpp_headers/trace.o 00:04:20.891 CXX test/cpp_headers/trace_parser.o 00:04:20.891 CXX test/cpp_headers/tree.o 00:04:20.891 CXX test/cpp_headers/ublk.o 00:04:20.891 CXX test/cpp_headers/util.o 00:04:21.151 CXX test/cpp_headers/uuid.o 00:04:21.151 CXX test/cpp_headers/version.o 00:04:21.151 CXX test/cpp_headers/vfio_user_pci.o 00:04:21.151 CXX test/cpp_headers/vfio_user_spec.o 00:04:21.151 CXX test/cpp_headers/vhost.o 00:04:21.151 CXX test/cpp_headers/vmd.o 00:04:21.151 CXX test/cpp_headers/xor.o 00:04:21.151 CXX test/cpp_headers/zipf.o 00:04:21.151 LINK nvmf 00:04:24.439 LINK esnap 00:04:24.698 00:04:24.698 real 0m58.178s 00:04:24.698 user 5m9.506s 00:04:24.698 sys 1m6.208s 00:04:24.698 06:35:51 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:24.698 06:35:51 make -- common/autotest_common.sh@10 -- $ set +x 00:04:24.698 ************************************ 00:04:24.698 END TEST make 00:04:24.698 ************************************ 00:04:24.698 06:35:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:24.698 06:35:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:24.698 06:35:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:24.698 06:35:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.698 06:35:51 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:24.956 06:35:51 -- pm/common@44 -- $ pid=6207 00:04:24.956 06:35:51 -- pm/common@50 -- $ kill -TERM 6207 00:04:24.956 06:35:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.956 06:35:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:24.956 06:35:51 -- pm/common@44 -- $ pid=6209 00:04:24.956 06:35:51 -- pm/common@50 -- $ kill -TERM 6209 00:04:24.956 06:35:52 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:24.956 06:35:52 -- nvmf/common.sh@7 -- # uname -s 00:04:24.956 06:35:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.956 06:35:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.956 06:35:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.956 06:35:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.956 06:35:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.956 06:35:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.956 06:35:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.956 06:35:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.956 06:35:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.956 06:35:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.956 06:35:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4260025c-9f07-406e-a2ce-e26fb147f69f 00:04:24.956 06:35:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=4260025c-9f07-406e-a2ce-e26fb147f69f 00:04:24.956 06:35:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.956 06:35:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.956 06:35:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:24.956 06:35:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.956 06:35:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:24.956 06:35:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.956 06:35:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.956 06:35:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.957 06:35:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.957 06:35:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.957 06:35:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.957 06:35:52 -- paths/export.sh@5 -- # export PATH 00:04:24.957 06:35:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.957 06:35:52 -- nvmf/common.sh@47 -- # : 0 00:04:24.957 06:35:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:24.957 06:35:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:24.957 06:35:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.957 06:35:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.957 06:35:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.957 06:35:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:24.957 06:35:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:24.957 06:35:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:24.957 06:35:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:24.957 06:35:52 -- spdk/autotest.sh@32 -- # uname -s 00:04:24.957 06:35:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:24.957 06:35:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:24.957 06:35:52 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:24.957 06:35:52 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:24.957 06:35:52 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:24.957 06:35:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:24.957 06:35:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:24.957 06:35:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:24.957 06:35:52 -- spdk/autotest.sh@48 -- # udevadm_pid=65637 00:04:24.957 06:35:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:24.957 06:35:52 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:24.957 06:35:52 -- pm/common@17 -- # local monitor 00:04:24.957 06:35:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.957 06:35:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.957 06:35:52 -- pm/common@25 -- # sleep 1 00:04:24.957 06:35:52 -- pm/common@21 -- # date +%s 00:04:24.957 06:35:52 -- pm/common@21 -- # date +%s 00:04:24.957 06:35:52 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1723617352 00:04:24.957 06:35:52 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1723617352 00:04:25.215 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1723617352_collect-vmstat.pm.log 00:04:25.215 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1723617352_collect-cpu-load.pm.log 00:04:26.151 06:35:53 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:26.151 06:35:53 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:26.151 06:35:53 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:26.151 06:35:53 -- common/autotest_common.sh@10 -- # set +x 00:04:26.151 06:35:53 -- spdk/autotest.sh@59 -- # create_test_list 00:04:26.151 06:35:53 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:26.151 06:35:53 -- common/autotest_common.sh@10 -- # set +x 00:04:26.151 06:35:53 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:26.151 06:35:53 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:26.151 06:35:53 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:26.151 06:35:53 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:26.151 06:35:53 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:26.151 06:35:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:26.151 06:35:53 -- common/autotest_common.sh@1451 -- # uname 00:04:26.151 06:35:53 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:26.151 06:35:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:26.151 06:35:53 -- common/autotest_common.sh@1471 -- # uname 00:04:26.151 06:35:53 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:26.151 06:35:53 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:26.151 06:35:53 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:26.151 06:35:53 -- spdk/autotest.sh@72 -- # hash lcov 00:04:26.151 06:35:53 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:26.151 06:35:53 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:26.151 --rc lcov_branch_coverage=1 00:04:26.151 --rc lcov_function_coverage=1 00:04:26.151 --rc genhtml_branch_coverage=1 00:04:26.151 --rc genhtml_function_coverage=1 00:04:26.151 --rc genhtml_legend=1 00:04:26.151 --rc geninfo_all_blocks=1 00:04:26.151 ' 00:04:26.151 06:35:53 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:26.151 --rc lcov_branch_coverage=1 00:04:26.151 --rc lcov_function_coverage=1 00:04:26.151 --rc genhtml_branch_coverage=1 00:04:26.151 --rc genhtml_function_coverage=1 00:04:26.151 --rc genhtml_legend=1 00:04:26.151 --rc geninfo_all_blocks=1 00:04:26.151 ' 00:04:26.151 06:35:53 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:26.151 --rc lcov_branch_coverage=1 00:04:26.151 --rc lcov_function_coverage=1 00:04:26.151 --rc genhtml_branch_coverage=1 00:04:26.151 --rc genhtml_function_coverage=1 00:04:26.151 --rc genhtml_legend=1 00:04:26.151 --rc geninfo_all_blocks=1 00:04:26.151 --no-external' 00:04:26.151 06:35:53 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:26.151 --rc lcov_branch_coverage=1 00:04:26.151 --rc lcov_function_coverage=1 00:04:26.151 --rc genhtml_branch_coverage=1 00:04:26.151 --rc genhtml_function_coverage=1 00:04:26.151 --rc genhtml_legend=1 00:04:26.151 --rc geninfo_all_blocks=1 00:04:26.151 --no-external' 00:04:26.151 06:35:53 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:26.151 lcov: LCOV version 1.15 00:04:26.151 06:35:53 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:41.104 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:41.104 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:53.320 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:53.320 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:53.320 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fsdev.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fsdev.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fsdev_module.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fsdev_module.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fuse_dispatcher.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fuse_dispatcher.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:53.321 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 
00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:53.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:53.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:53.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:53.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:53.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:53.322 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:53.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:53.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:53.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:53.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:53.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:53.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:53.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:53.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:53.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:53.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:53.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:53.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:53.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:53.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:53.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:53.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:55.878 06:36:22 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:55.878 06:36:22 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:55.878 06:36:22 -- common/autotest_common.sh@10 -- # set +x 00:04:55.878 06:36:22 -- spdk/autotest.sh@91 -- # rm -f 00:04:55.878 06:36:22 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:56.445 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.445 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:56.703 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:56.703 06:36:23 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:56.703 06:36:23 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:56.703 06:36:23 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:56.703 06:36:23 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:56.703 06:36:23 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:56.703 06:36:23 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:56.703 06:36:23 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:56.703 06:36:23 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:56.703 06:36:23 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:56.703 06:36:23 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:56.703 06:36:23 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:56.703 06:36:23 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:56.703 06:36:23 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:56.703 06:36:23 -- 
common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:56.703 06:36:23 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:56.703 06:36:23 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:04:56.703 06:36:23 -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:04:56.703 06:36:23 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:56.703 06:36:23 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:56.703 06:36:23 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:56.703 06:36:23 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:04:56.703 06:36:23 -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:04:56.703 06:36:23 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:56.703 06:36:23 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:56.703 06:36:23 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:56.703 06:36:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:56.703 06:36:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:56.703 06:36:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:56.703 06:36:23 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:56.703 06:36:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:56.703 No valid GPT data, bailing 00:04:56.703 06:36:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:56.703 06:36:23 -- scripts/common.sh@391 -- # pt= 00:04:56.703 06:36:23 -- scripts/common.sh@392 -- # return 1 00:04:56.703 06:36:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:56.703 1+0 records in 00:04:56.703 1+0 records out 00:04:56.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0063226 s, 166 MB/s 00:04:56.703 06:36:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:56.703 06:36:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:56.703 06:36:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:56.703 06:36:23 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:56.703 06:36:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:56.703 No valid GPT data, bailing 00:04:56.703 06:36:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:56.703 06:36:23 -- scripts/common.sh@391 -- # pt= 00:04:56.703 06:36:23 -- scripts/common.sh@392 -- # return 1 00:04:56.703 06:36:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:56.703 1+0 records in 00:04:56.703 1+0 records out 00:04:56.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432116 s, 243 MB/s 00:04:56.703 06:36:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:56.703 06:36:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:56.703 06:36:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:56.703 06:36:23 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:56.704 06:36:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:56.704 No valid GPT data, bailing 00:04:56.962 06:36:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:56.962 06:36:23 -- scripts/common.sh@391 -- # pt= 00:04:56.962 06:36:23 -- scripts/common.sh@392 -- # return 1 00:04:56.962 06:36:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:56.962 1+0 records 
in 00:04:56.962 1+0 records out 00:04:56.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00371969 s, 282 MB/s 00:04:56.962 06:36:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:56.962 06:36:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:56.962 06:36:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:56.962 06:36:23 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:56.962 06:36:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:56.962 No valid GPT data, bailing 00:04:56.962 06:36:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:56.962 06:36:24 -- scripts/common.sh@391 -- # pt= 00:04:56.962 06:36:24 -- scripts/common.sh@392 -- # return 1 00:04:56.962 06:36:24 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:56.962 1+0 records in 00:04:56.962 1+0 records out 00:04:56.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00647683 s, 162 MB/s 00:04:56.962 06:36:24 -- spdk/autotest.sh@118 -- # sync 00:04:56.962 06:36:24 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:56.962 06:36:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:56.962 06:36:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:00.277 06:36:26 -- spdk/autotest.sh@124 -- # uname -s 00:05:00.277 06:36:26 -- spdk/autotest.sh@124 -- # [[ Linux == Linux ]] 00:05:00.277 06:36:26 -- spdk/autotest.sh@124 -- # [[ 0 -eq 1 ]] 00:05:00.277 06:36:26 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:00.551 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.551 Hugepages 00:05:00.551 node hugesize free / total 00:05:00.551 node0 1048576kB 0 / 0 00:05:00.551 node0 2048kB 0 / 0 00:05:00.551 00:05:00.551 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:00.551 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:00.811 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:00.811 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:00.811 06:36:28 -- spdk/autotest.sh@130 -- # uname -s 00:05:00.811 06:36:28 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:00.811 06:36:28 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:00.811 06:36:28 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:01.750 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.750 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:01.750 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.010 06:36:29 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:02.947 06:36:30 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:02.947 06:36:30 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:02.947 06:36:30 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:02.947 06:36:30 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:02.947 06:36:30 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:02.947 06:36:30 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:02.947 06:36:30 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:02.947 06:36:30 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:02.947 06:36:30 -- 
common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:02.947 06:36:30 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:05:02.947 06:36:30 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:02.947 06:36:30 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:03.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.517 Waiting for block devices as requested 00:05:03.517 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:03.777 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:03.777 06:36:30 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:03.777 06:36:30 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:03.777 06:36:30 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:05:03.777 06:36:30 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:03.777 06:36:30 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:03.777 06:36:30 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:03.777 06:36:30 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:03.777 06:36:30 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:05:03.777 06:36:30 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:05:03.777 06:36:30 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:05:03.777 06:36:30 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:05:03.777 06:36:30 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:03.777 06:36:30 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:03.777 06:36:30 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:03.777 06:36:30 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:03.777 06:36:30 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:03.777 06:36:30 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 00:05:03.777 06:36:30 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:03.777 06:36:30 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:03.777 06:36:30 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:03.777 06:36:30 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:03.777 06:36:30 -- common/autotest_common.sh@1553 -- # continue 00:05:03.777 06:36:30 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:03.777 06:36:30 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:03.777 06:36:30 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:03.777 06:36:30 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:05:03.777 06:36:30 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:03.777 06:36:30 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:03.777 06:36:30 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:03.777 06:36:30 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:03.777 06:36:30 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:03.777 06:36:30 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 
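The trace above, part of the nvme_namespace_revert step, resolves each NVMe PCI address (obtained from gen_nvme.sh piped through jq) to its /dev controller node via the sysfs symlinks, then reads the OACS field from nvme id-ctrl to see whether namespace management (bit 3, value 0x8) is supported. A minimal bash sketch of that check, assuming the nvme-cli output format shown in the log; the variable names and the explicit 0x08 mask are illustrative rather than copied from autotest_common.sh:

    bdf=0000:00:10.0
    # The resolved controller path under sysfs contains the PCI address,
    # e.g. /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
    ctrlr_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrlr=/dev/$(basename "$ctrlr_path")                      # e.g. /dev/nvme1
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # e.g. ' 0x12a'
    if (( oacs & 0x08 )); then
        echo "$ctrlr supports namespace management"
    fi

With the oacs value of 0x12a shown in the log, the masked result is 8, which matches the oacs_ns_manage=8 computed by the script before it goes on to check unvmcap.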
00:05:03.777 06:36:30 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:03.777 06:36:30 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:03.777 06:36:30 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:03.777 06:36:30 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:03.777 06:36:30 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:03.777 06:36:30 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:03.777 06:36:30 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:03.777 06:36:30 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:03.777 06:36:30 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:03.777 06:36:30 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:03.777 06:36:30 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:03.777 06:36:30 -- common/autotest_common.sh@1553 -- # continue 00:05:03.777 06:36:30 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:03.777 06:36:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.777 06:36:30 -- common/autotest_common.sh@10 -- # set +x 00:05:03.777 06:36:31 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:03.777 06:36:31 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:03.777 06:36:31 -- common/autotest_common.sh@10 -- # set +x 00:05:03.777 06:36:31 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:04.716 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.716 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.716 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.976 06:36:32 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:04.976 06:36:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.976 06:36:32 -- common/autotest_common.sh@10 -- # set +x 00:05:04.976 06:36:32 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:04.976 06:36:32 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:04.976 06:36:32 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:04.976 06:36:32 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:04.976 06:36:32 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:04.976 06:36:32 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:04.976 06:36:32 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:04.976 06:36:32 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:04.976 06:36:32 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:04.976 06:36:32 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:04.976 06:36:32 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:04.976 06:36:32 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:05:04.976 06:36:32 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:04.976 06:36:32 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:04.976 06:36:32 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:04.976 06:36:32 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:04.976 06:36:32 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:04.976 06:36:32 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:04.976 06:36:32 -- common/autotest_common.sh@1576 -- # cat 
/sys/bus/pci/devices/0000:00:11.0/device 00:05:04.976 06:36:32 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:04.976 06:36:32 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:04.976 06:36:32 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:05:04.976 06:36:32 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:05:04.976 06:36:32 -- common/autotest_common.sh@1589 -- # return 0 00:05:04.976 06:36:32 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:04.976 06:36:32 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:04.976 06:36:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:04.976 06:36:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:04.976 06:36:32 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:04.976 06:36:32 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:04.976 06:36:32 -- common/autotest_common.sh@10 -- # set +x 00:05:04.976 06:36:32 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:04.976 06:36:32 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:04.976 06:36:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:04.976 06:36:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.976 06:36:32 -- common/autotest_common.sh@10 -- # set +x 00:05:04.976 ************************************ 00:05:04.976 START TEST env 00:05:04.976 ************************************ 00:05:04.976 06:36:32 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.236 * Looking for test storage... 00:05:05.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:05.236 06:36:32 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.236 06:36:32 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:05.236 06:36:32 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.236 06:36:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.236 ************************************ 00:05:05.236 START TEST env_memory 00:05:05.236 ************************************ 00:05:05.236 06:36:32 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.236 00:05:05.236 00:05:05.236 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.236 http://cunit.sourceforge.net/ 00:05:05.236 00:05:05.236 00:05:05.236 Suite: memory 00:05:05.236 Test: alloc and free memory map ...[2024-08-14 06:36:32.414476] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:05.236 passed 00:05:05.236 Test: mem map translation ...[2024-08-14 06:36:32.458235] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:05.236 [2024-08-14 06:36:32.458279] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:05.236 [2024-08-14 06:36:32.458332] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:05.236 [2024-08-14 06:36:32.458357] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:05.496 passed 00:05:05.496 Test: mem map registration ...[2024-08-14 
06:36:32.523734] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:05.496 [2024-08-14 06:36:32.523776] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:05.496 passed 00:05:05.496 Test: mem map adjacent registrations ...passed 00:05:05.496 00:05:05.496 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.496 suites 1 1 n/a 0 0 00:05:05.496 tests 4 4 4 0 0 00:05:05.496 asserts 152 152 152 0 n/a 00:05:05.496 00:05:05.496 Elapsed time = 0.238 seconds 00:05:05.496 00:05:05.496 real 0m0.290s 00:05:05.496 user 0m0.255s 00:05:05.496 sys 0m0.026s 00:05:05.496 06:36:32 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.496 06:36:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:05.496 ************************************ 00:05:05.496 END TEST env_memory 00:05:05.496 ************************************ 00:05:05.496 06:36:32 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:05.496 06:36:32 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:05.496 06:36:32 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.496 06:36:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.496 ************************************ 00:05:05.496 START TEST env_vtophys 00:05:05.496 ************************************ 00:05:05.496 06:36:32 env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:05.496 EAL: lib.eal log level changed from notice to debug 00:05:05.496 EAL: Detected lcore 0 as core 0 on socket 0 00:05:05.496 EAL: Detected lcore 1 as core 0 on socket 0 00:05:05.496 EAL: Detected lcore 2 as core 0 on socket 0 00:05:05.496 EAL: Detected lcore 3 as core 0 on socket 0 00:05:05.496 EAL: Detected lcore 4 as core 0 on socket 0 00:05:05.496 EAL: Detected lcore 5 as core 0 on socket 0 00:05:05.496 EAL: Detected lcore 6 as core 0 on socket 0 00:05:05.496 EAL: Detected lcore 7 as core 0 on socket 0 00:05:05.496 EAL: Detected lcore 8 as core 0 on socket 0 00:05:05.496 EAL: Detected lcore 9 as core 0 on socket 0 00:05:05.756 EAL: Maximum logical cores by configuration: 128 00:05:05.756 EAL: Detected CPU lcores: 10 00:05:05.756 EAL: Detected NUMA nodes: 1 00:05:05.756 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:05.756 EAL: Detected shared linkage of DPDK 00:05:05.756 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:05.756 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:05.756 EAL: Registered [vdev] bus. 
00:05:05.756 EAL: bus.vdev log level changed from disabled to notice 00:05:05.756 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:05.756 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:05.756 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:05.756 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:05.756 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:05.756 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:05.756 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:05.756 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:05.756 EAL: No shared files mode enabled, IPC will be disabled 00:05:05.756 EAL: No shared files mode enabled, IPC is disabled 00:05:05.756 EAL: Selected IOVA mode 'PA' 00:05:05.756 EAL: Probing VFIO support... 00:05:05.756 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:05.756 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:05.756 EAL: Ask a virtual area of 0x2e000 bytes 00:05:05.756 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:05.756 EAL: Setting up physically contiguous memory... 00:05:05.756 EAL: Setting maximum number of open files to 524288 00:05:05.756 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:05.756 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:05.756 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.756 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:05.756 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.756 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.756 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:05.756 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:05.756 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.756 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:05.756 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.756 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.756 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:05.756 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:05.756 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.756 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:05.756 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.756 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.756 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:05.756 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:05.756 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.756 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:05.756 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.756 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.756 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:05.756 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:05.756 EAL: Hugepages will be freed exactly as allocated. 
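Each "Ask a virtual area of 0x400000000 bytes" above is one memseg list being reserved: 8192 segments of 2 MiB hugepages is 16 GiB, i.e. 0x400000000 bytes, and each list is preceded by a small 0x61000-byte reservation that appears to hold the list's own metadata. With four such lists the EAL sets aside roughly 64 GiB of virtual address space up front, even though no hugepages are actually backed yet. A quick arithmetic check of that size (an illustrative one-liner, not part of the test run):

    printf '0x%x\n' $((8192 * 2 * 1024 * 1024))   # 16 GiB -> 0x400000000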
00:05:05.756 EAL: No shared files mode enabled, IPC is disabled 00:05:05.756 EAL: No shared files mode enabled, IPC is disabled 00:05:05.756 EAL: TSC frequency is ~2290000 KHz 00:05:05.756 EAL: Main lcore 0 is ready (tid=7f4acb212a40;cpuset=[0]) 00:05:05.756 EAL: Trying to obtain current memory policy. 00:05:05.756 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.756 EAL: Restoring previous memory policy: 0 00:05:05.756 EAL: request: mp_malloc_sync 00:05:05.756 EAL: No shared files mode enabled, IPC is disabled 00:05:05.756 EAL: Heap on socket 0 was expanded by 2MB 00:05:05.756 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:05.756 EAL: No shared files mode enabled, IPC is disabled 00:05:05.756 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:05.756 EAL: Mem event callback 'spdk:(nil)' registered 00:05:05.756 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:05.756 00:05:05.756 00:05:05.756 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.756 http://cunit.sourceforge.net/ 00:05:05.756 00:05:05.756 00:05:05.756 Suite: components_suite 00:05:06.017 Test: vtophys_malloc_test ...passed 00:05:06.017 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:06.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.017 EAL: Restoring previous memory policy: 4 00:05:06.017 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.017 EAL: request: mp_malloc_sync 00:05:06.017 EAL: No shared files mode enabled, IPC is disabled 00:05:06.017 EAL: Heap on socket 0 was expanded by 4MB 00:05:06.017 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.017 EAL: request: mp_malloc_sync 00:05:06.017 EAL: No shared files mode enabled, IPC is disabled 00:05:06.017 EAL: Heap on socket 0 was shrunk by 4MB 00:05:06.017 EAL: Trying to obtain current memory policy. 00:05:06.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.017 EAL: Restoring previous memory policy: 4 00:05:06.017 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.017 EAL: request: mp_malloc_sync 00:05:06.017 EAL: No shared files mode enabled, IPC is disabled 00:05:06.017 EAL: Heap on socket 0 was expanded by 6MB 00:05:06.017 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.017 EAL: request: mp_malloc_sync 00:05:06.017 EAL: No shared files mode enabled, IPC is disabled 00:05:06.017 EAL: Heap on socket 0 was shrunk by 6MB 00:05:06.017 EAL: Trying to obtain current memory policy. 00:05:06.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.017 EAL: Restoring previous memory policy: 4 00:05:06.017 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.017 EAL: request: mp_malloc_sync 00:05:06.017 EAL: No shared files mode enabled, IPC is disabled 00:05:06.017 EAL: Heap on socket 0 was expanded by 10MB 00:05:06.017 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.017 EAL: request: mp_malloc_sync 00:05:06.017 EAL: No shared files mode enabled, IPC is disabled 00:05:06.017 EAL: Heap on socket 0 was shrunk by 10MB 00:05:06.017 EAL: Trying to obtain current memory policy. 
00:05:06.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.017 EAL: Restoring previous memory policy: 4 00:05:06.017 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.017 EAL: request: mp_malloc_sync 00:05:06.017 EAL: No shared files mode enabled, IPC is disabled 00:05:06.017 EAL: Heap on socket 0 was expanded by 18MB 00:05:06.017 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.017 EAL: request: mp_malloc_sync 00:05:06.017 EAL: No shared files mode enabled, IPC is disabled 00:05:06.017 EAL: Heap on socket 0 was shrunk by 18MB 00:05:06.017 EAL: Trying to obtain current memory policy. 00:05:06.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.017 EAL: Restoring previous memory policy: 4 00:05:06.017 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.017 EAL: request: mp_malloc_sync 00:05:06.017 EAL: No shared files mode enabled, IPC is disabled 00:05:06.017 EAL: Heap on socket 0 was expanded by 34MB 00:05:06.017 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.017 EAL: request: mp_malloc_sync 00:05:06.017 EAL: No shared files mode enabled, IPC is disabled 00:05:06.017 EAL: Heap on socket 0 was shrunk by 34MB 00:05:06.017 EAL: Trying to obtain current memory policy. 00:05:06.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.017 EAL: Restoring previous memory policy: 4 00:05:06.017 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.017 EAL: request: mp_malloc_sync 00:05:06.017 EAL: No shared files mode enabled, IPC is disabled 00:05:06.017 EAL: Heap on socket 0 was expanded by 66MB 00:05:06.277 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.277 EAL: request: mp_malloc_sync 00:05:06.277 EAL: No shared files mode enabled, IPC is disabled 00:05:06.277 EAL: Heap on socket 0 was shrunk by 66MB 00:05:06.277 EAL: Trying to obtain current memory policy. 00:05:06.277 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.277 EAL: Restoring previous memory policy: 4 00:05:06.277 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.277 EAL: request: mp_malloc_sync 00:05:06.277 EAL: No shared files mode enabled, IPC is disabled 00:05:06.277 EAL: Heap on socket 0 was expanded by 130MB 00:05:06.277 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.277 EAL: request: mp_malloc_sync 00:05:06.277 EAL: No shared files mode enabled, IPC is disabled 00:05:06.277 EAL: Heap on socket 0 was shrunk by 130MB 00:05:06.277 EAL: Trying to obtain current memory policy. 00:05:06.277 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.277 EAL: Restoring previous memory policy: 4 00:05:06.277 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.277 EAL: request: mp_malloc_sync 00:05:06.277 EAL: No shared files mode enabled, IPC is disabled 00:05:06.277 EAL: Heap on socket 0 was expanded by 258MB 00:05:06.277 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.277 EAL: request: mp_malloc_sync 00:05:06.277 EAL: No shared files mode enabled, IPC is disabled 00:05:06.277 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.277 EAL: Trying to obtain current memory policy. 
00:05:06.277 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.537 EAL: Restoring previous memory policy: 4 00:05:06.537 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.537 EAL: request: mp_malloc_sync 00:05:06.537 EAL: No shared files mode enabled, IPC is disabled 00:05:06.537 EAL: Heap on socket 0 was expanded by 514MB 00:05:06.537 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.537 EAL: request: mp_malloc_sync 00:05:06.537 EAL: No shared files mode enabled, IPC is disabled 00:05:06.537 EAL: Heap on socket 0 was shrunk by 514MB 00:05:06.537 EAL: Trying to obtain current memory policy. 00:05:06.537 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.796 EAL: Restoring previous memory policy: 4 00:05:06.796 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.796 EAL: request: mp_malloc_sync 00:05:06.796 EAL: No shared files mode enabled, IPC is disabled 00:05:06.796 EAL: Heap on socket 0 was expanded by 1026MB 00:05:07.056 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.056 passed 00:05:07.056 00:05:07.056 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.056 suites 1 1 n/a 0 0 00:05:07.056 tests 2 2 2 0 0 00:05:07.056 asserts 5337 5337 5337 0 n/a 00:05:07.056 00:05:07.056 Elapsed time = 1.370 seconds 00:05:07.056 EAL: request: mp_malloc_sync 00:05:07.056 EAL: No shared files mode enabled, IPC is disabled 00:05:07.056 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:07.056 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.056 EAL: request: mp_malloc_sync 00:05:07.056 EAL: No shared files mode enabled, IPC is disabled 00:05:07.056 EAL: Heap on socket 0 was shrunk by 2MB 00:05:07.056 EAL: No shared files mode enabled, IPC is disabled 00:05:07.056 EAL: No shared files mode enabled, IPC is disabled 00:05:07.056 EAL: No shared files mode enabled, IPC is disabled 00:05:07.056 00:05:07.056 real 0m1.596s 00:05:07.056 user 0m0.752s 00:05:07.056 sys 0m0.714s 00:05:07.056 06:36:34 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.056 06:36:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:07.056 ************************************ 00:05:07.056 END TEST env_vtophys 00:05:07.056 ************************************ 00:05:07.316 06:36:34 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:07.316 06:36:34 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.316 06:36:34 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.316 06:36:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.316 ************************************ 00:05:07.316 START TEST env_pci 00:05:07.316 ************************************ 00:05:07.316 06:36:34 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:07.316 00:05:07.316 00:05:07.316 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.316 http://cunit.sourceforge.net/ 00:05:07.316 00:05:07.316 00:05:07.316 Suite: pci 00:05:07.316 Test: pci_hook ...[2024-08-14 06:36:34.392652] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68002 has claimed it 00:05:07.316 passed 00:05:07.316 00:05:07.316 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.316 suites 1 1 n/a 0 0 00:05:07.316 tests 1 1 1 0 0 00:05:07.316 asserts 25 25 25 0 n/a 00:05:07.316 00:05:07.316 Elapsed time = 0.008 seconds 00:05:07.316 EAL: Cannot find 
device (10000:00:01.0) 00:05:07.316 EAL: Failed to attach device on primary process 00:05:07.316 00:05:07.316 real 0m0.093s 00:05:07.316 user 0m0.040s 00:05:07.316 sys 0m0.052s 00:05:07.316 06:36:34 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.316 06:36:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:07.316 ************************************ 00:05:07.316 END TEST env_pci 00:05:07.316 ************************************ 00:05:07.316 06:36:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:07.316 06:36:34 env -- env/env.sh@15 -- # uname 00:05:07.316 06:36:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:07.316 06:36:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:07.316 06:36:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.316 06:36:34 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:07.316 06:36:34 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.316 06:36:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.316 ************************************ 00:05:07.316 START TEST env_dpdk_post_init 00:05:07.316 ************************************ 00:05:07.316 06:36:34 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.576 EAL: Detected CPU lcores: 10 00:05:07.576 EAL: Detected NUMA nodes: 1 00:05:07.576 EAL: Detected shared linkage of DPDK 00:05:07.576 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:07.576 EAL: Selected IOVA mode 'PA' 00:05:07.576 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:07.576 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:07.576 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:07.576 Starting DPDK initialization... 00:05:07.576 Starting SPDK post initialization... 00:05:07.576 SPDK NVMe probe 00:05:07.576 Attaching to 0000:00:10.0 00:05:07.576 Attaching to 0000:00:11.0 00:05:07.576 Attached to 0000:00:10.0 00:05:07.576 Attached to 0000:00:11.0 00:05:07.576 Cleaning up... 
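The probe/attach messages above come from the env_dpdk_post_init helper, which the harness launched with a single-core mask and a pinned base virtual address (both visible in the xtrace earlier in this log). A minimal sketch of re-running the same check by hand, assuming the repo layout used on this runner:

    # Re-run the DPDK post-initialization check: it probes the NVMe
    # controllers SPDK can see, attaches to them, and cleans up again.
    # Typically needs root for hugepage access.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK_REPO/test/env/env_dpdk_post_init/env_dpdk_post_init" \
        -c 0x1 --base-virtaddr=0x200000000000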
00:05:07.576 00:05:07.576 real 0m0.232s 00:05:07.576 user 0m0.065s 00:05:07.576 sys 0m0.068s 00:05:07.576 06:36:34 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.576 06:36:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:07.576 ************************************ 00:05:07.576 END TEST env_dpdk_post_init 00:05:07.576 ************************************ 00:05:07.576 06:36:34 env -- env/env.sh@26 -- # uname 00:05:07.576 06:36:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:07.576 06:36:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:07.576 06:36:34 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.576 06:36:34 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.576 06:36:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.842 ************************************ 00:05:07.842 START TEST env_mem_callbacks 00:05:07.842 ************************************ 00:05:07.842 06:36:34 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:07.842 EAL: Detected CPU lcores: 10 00:05:07.842 EAL: Detected NUMA nodes: 1 00:05:07.842 EAL: Detected shared linkage of DPDK 00:05:07.842 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:07.842 EAL: Selected IOVA mode 'PA' 00:05:07.842 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:07.842 00:05:07.842 00:05:07.842 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.842 http://cunit.sourceforge.net/ 00:05:07.842 00:05:07.842 00:05:07.842 Suite: memory 00:05:07.842 Test: test ... 00:05:07.842 register 0x200000200000 2097152 00:05:07.842 malloc 3145728 00:05:07.842 register 0x200000400000 4194304 00:05:07.842 buf 0x200000500000 len 3145728 PASSED 00:05:07.842 malloc 64 00:05:07.842 buf 0x2000004fff40 len 64 PASSED 00:05:07.842 malloc 4194304 00:05:07.842 register 0x200000800000 6291456 00:05:07.842 buf 0x200000a00000 len 4194304 PASSED 00:05:07.842 free 0x200000500000 3145728 00:05:07.842 free 0x2000004fff40 64 00:05:07.842 unregister 0x200000400000 4194304 PASSED 00:05:07.842 free 0x200000a00000 4194304 00:05:07.842 unregister 0x200000800000 6291456 PASSED 00:05:07.842 malloc 8388608 00:05:07.842 register 0x200000400000 10485760 00:05:07.842 buf 0x200000600000 len 8388608 PASSED 00:05:07.842 free 0x200000600000 8388608 00:05:07.842 unregister 0x200000400000 10485760 PASSED 00:05:07.842 passed 00:05:07.842 00:05:07.842 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.842 suites 1 1 n/a 0 0 00:05:07.842 tests 1 1 1 0 0 00:05:07.842 asserts 15 15 15 0 n/a 00:05:07.842 00:05:07.842 Elapsed time = 0.011 seconds 00:05:07.842 00:05:07.842 real 0m0.181s 00:05:07.842 user 0m0.030s 00:05:07.842 sys 0m0.050s 00:05:07.842 06:36:35 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.842 06:36:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:07.842 ************************************ 00:05:07.842 END TEST env_mem_callbacks 00:05:07.842 ************************************ 00:05:07.842 00:05:07.842 real 0m2.857s 00:05:07.842 user 0m1.298s 00:05:07.842 sys 0m1.236s 00:05:07.842 06:36:35 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.843 06:36:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.843 ************************************ 00:05:07.843 END TEST env 00:05:07.843 
************************************ 00:05:08.123 06:36:35 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:08.123 06:36:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.123 06:36:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.123 06:36:35 -- common/autotest_common.sh@10 -- # set +x 00:05:08.123 ************************************ 00:05:08.123 START TEST rpc 00:05:08.123 ************************************ 00:05:08.123 06:36:35 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:08.123 * Looking for test storage... 00:05:08.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:08.123 06:36:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68115 00:05:08.123 06:36:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.123 06:36:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68115 00:05:08.123 06:36:35 rpc -- common/autotest_common.sh@827 -- # '[' -z 68115 ']' 00:05:08.123 06:36:35 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.123 06:36:35 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:08.123 06:36:35 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:08.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.123 06:36:35 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.123 06:36:35 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:08.123 06:36:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.123 [2024-08-14 06:36:35.353048] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:05:08.123 [2024-08-14 06:36:35.353182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68115 ] 00:05:08.382 [2024-08-14 06:36:35.482495] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.382 [2024-08-14 06:36:35.527731] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:08.382 [2024-08-14 06:36:35.527785] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68115' to capture a snapshot of events at runtime. 00:05:08.382 [2024-08-14 06:36:35.527802] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:08.382 [2024-08-14 06:36:35.527812] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:08.382 [2024-08-14 06:36:35.527826] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68115 for offline analysis/debug. 
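At this point spdk_tgt (pid 68115) is up and listening on /var/tmp/spdk.sock, and the rpc_integrity test that follows drives it over JSON-RPC: create a malloc bdev, layer a passthru bdev on top of it, list both, then delete them again. A rough hand-run equivalent of that round trip, assuming the standard scripts/rpc.py client and the socket path shown above (the small rpc helper is just local shorthand):

    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    rpc() { "$SPDK_REPO/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
    rpc bdev_malloc_create 8 512           # 8 MB malloc bdev, 512-byte blocks; prints its name, e.g. Malloc0
    rpc bdev_passthru_create -b Malloc0 -p Passthru0
    rpc bdev_get_bdevs | jq length         # expect 2 while both bdevs exist
    rpc bdev_passthru_delete Passthru0
    rpc bdev_malloc_delete Malloc0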
00:05:08.382 [2024-08-14 06:36:35.527867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.952 06:36:36 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:08.952 06:36:36 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:08.952 06:36:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:08.952 06:36:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:08.952 06:36:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:08.952 06:36:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:08.952 06:36:36 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.952 06:36:36 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.952 06:36:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.952 ************************************ 00:05:08.952 START TEST rpc_integrity 00:05:08.952 ************************************ 00:05:08.952 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:08.952 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:08.952 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:08.952 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.952 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:08.952 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:08.952 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:09.213 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:09.213 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:09.213 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.213 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.213 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:09.213 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:09.213 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:09.213 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.213 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.213 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:09.213 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:09.213 { 00:05:09.213 "name": "Malloc0", 00:05:09.213 "aliases": [ 00:05:09.213 "ec0b85f0-ae32-4f20-8e7b-32e3835fb76c" 00:05:09.213 ], 00:05:09.213 "product_name": "Malloc disk", 00:05:09.213 "block_size": 512, 00:05:09.213 "num_blocks": 16384, 00:05:09.213 "uuid": "ec0b85f0-ae32-4f20-8e7b-32e3835fb76c", 00:05:09.213 "assigned_rate_limits": { 00:05:09.213 "rw_ios_per_sec": 0, 00:05:09.213 "rw_mbytes_per_sec": 0, 00:05:09.213 "r_mbytes_per_sec": 0, 00:05:09.213 "w_mbytes_per_sec": 0 00:05:09.213 }, 00:05:09.213 "claimed": false, 00:05:09.213 "zoned": false, 00:05:09.213 "supported_io_types": { 00:05:09.213 "read": true, 00:05:09.213 "write": true, 00:05:09.213 "unmap": true, 00:05:09.213 "flush": true, 
00:05:09.213 "reset": true, 00:05:09.213 "nvme_admin": false, 00:05:09.213 "nvme_io": false, 00:05:09.213 "nvme_io_md": false, 00:05:09.213 "write_zeroes": true, 00:05:09.213 "zcopy": true, 00:05:09.213 "get_zone_info": false, 00:05:09.213 "zone_management": false, 00:05:09.213 "zone_append": false, 00:05:09.213 "compare": false, 00:05:09.213 "compare_and_write": false, 00:05:09.213 "abort": true, 00:05:09.213 "seek_hole": false, 00:05:09.213 "seek_data": false, 00:05:09.213 "copy": true, 00:05:09.213 "nvme_iov_md": false 00:05:09.213 }, 00:05:09.213 "memory_domains": [ 00:05:09.213 { 00:05:09.213 "dma_device_id": "system", 00:05:09.213 "dma_device_type": 1 00:05:09.213 }, 00:05:09.213 { 00:05:09.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.213 "dma_device_type": 2 00:05:09.213 } 00:05:09.213 ], 00:05:09.213 "driver_specific": {} 00:05:09.213 } 00:05:09.213 ]' 00:05:09.213 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:09.213 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:09.213 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:09.213 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.213 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.213 [2024-08-14 06:36:36.336582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:09.213 [2024-08-14 06:36:36.336645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:09.213 [2024-08-14 06:36:36.336672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:05:09.213 [2024-08-14 06:36:36.336690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:09.213 [2024-08-14 06:36:36.338935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:09.213 [2024-08-14 06:36:36.338972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:09.213 Passthru0 00:05:09.213 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:09.213 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:09.213 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.213 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.213 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:09.213 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:09.213 { 00:05:09.213 "name": "Malloc0", 00:05:09.213 "aliases": [ 00:05:09.213 "ec0b85f0-ae32-4f20-8e7b-32e3835fb76c" 00:05:09.213 ], 00:05:09.213 "product_name": "Malloc disk", 00:05:09.213 "block_size": 512, 00:05:09.213 "num_blocks": 16384, 00:05:09.213 "uuid": "ec0b85f0-ae32-4f20-8e7b-32e3835fb76c", 00:05:09.213 "assigned_rate_limits": { 00:05:09.213 "rw_ios_per_sec": 0, 00:05:09.213 "rw_mbytes_per_sec": 0, 00:05:09.213 "r_mbytes_per_sec": 0, 00:05:09.213 "w_mbytes_per_sec": 0 00:05:09.213 }, 00:05:09.213 "claimed": true, 00:05:09.213 "claim_type": "exclusive_write", 00:05:09.213 "zoned": false, 00:05:09.213 "supported_io_types": { 00:05:09.213 "read": true, 00:05:09.213 "write": true, 00:05:09.213 "unmap": true, 00:05:09.213 "flush": true, 00:05:09.213 "reset": true, 00:05:09.213 "nvme_admin": false, 00:05:09.213 "nvme_io": false, 00:05:09.213 "nvme_io_md": false, 00:05:09.213 "write_zeroes": true, 00:05:09.213 "zcopy": true, 
00:05:09.213 "get_zone_info": false, 00:05:09.213 "zone_management": false, 00:05:09.213 "zone_append": false, 00:05:09.213 "compare": false, 00:05:09.213 "compare_and_write": false, 00:05:09.213 "abort": true, 00:05:09.213 "seek_hole": false, 00:05:09.213 "seek_data": false, 00:05:09.213 "copy": true, 00:05:09.213 "nvme_iov_md": false 00:05:09.213 }, 00:05:09.213 "memory_domains": [ 00:05:09.213 { 00:05:09.213 "dma_device_id": "system", 00:05:09.213 "dma_device_type": 1 00:05:09.213 }, 00:05:09.213 { 00:05:09.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.213 "dma_device_type": 2 00:05:09.213 } 00:05:09.213 ], 00:05:09.213 "driver_specific": {} 00:05:09.213 }, 00:05:09.213 { 00:05:09.213 "name": "Passthru0", 00:05:09.213 "aliases": [ 00:05:09.213 "21b1221b-9913-5dcc-bf1f-51a4300ff134" 00:05:09.213 ], 00:05:09.213 "product_name": "passthru", 00:05:09.213 "block_size": 512, 00:05:09.213 "num_blocks": 16384, 00:05:09.213 "uuid": "21b1221b-9913-5dcc-bf1f-51a4300ff134", 00:05:09.213 "assigned_rate_limits": { 00:05:09.213 "rw_ios_per_sec": 0, 00:05:09.213 "rw_mbytes_per_sec": 0, 00:05:09.213 "r_mbytes_per_sec": 0, 00:05:09.213 "w_mbytes_per_sec": 0 00:05:09.213 }, 00:05:09.213 "claimed": false, 00:05:09.213 "zoned": false, 00:05:09.213 "supported_io_types": { 00:05:09.213 "read": true, 00:05:09.213 "write": true, 00:05:09.213 "unmap": true, 00:05:09.213 "flush": true, 00:05:09.213 "reset": true, 00:05:09.213 "nvme_admin": false, 00:05:09.213 "nvme_io": false, 00:05:09.213 "nvme_io_md": false, 00:05:09.213 "write_zeroes": true, 00:05:09.213 "zcopy": true, 00:05:09.213 "get_zone_info": false, 00:05:09.213 "zone_management": false, 00:05:09.213 "zone_append": false, 00:05:09.213 "compare": false, 00:05:09.213 "compare_and_write": false, 00:05:09.213 "abort": true, 00:05:09.213 "seek_hole": false, 00:05:09.213 "seek_data": false, 00:05:09.213 "copy": true, 00:05:09.213 "nvme_iov_md": false 00:05:09.213 }, 00:05:09.213 "memory_domains": [ 00:05:09.213 { 00:05:09.213 "dma_device_id": "system", 00:05:09.213 "dma_device_type": 1 00:05:09.213 }, 00:05:09.213 { 00:05:09.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.213 "dma_device_type": 2 00:05:09.213 } 00:05:09.213 ], 00:05:09.213 "driver_specific": { 00:05:09.213 "passthru": { 00:05:09.213 "name": "Passthru0", 00:05:09.213 "base_bdev_name": "Malloc0" 00:05:09.213 } 00:05:09.213 } 00:05:09.213 } 00:05:09.214 ]' 00:05:09.214 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:09.214 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:09.214 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:09.214 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.214 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.214 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:09.214 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:09.214 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.214 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.214 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:09.214 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:09.214 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.214 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:09.214 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:09.214 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:09.214 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:09.474 06:36:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:09.474 00:05:09.474 real 0m0.321s 00:05:09.474 user 0m0.190s 00:05:09.474 sys 0m0.058s 00:05:09.474 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.474 06:36:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.474 ************************************ 00:05:09.474 END TEST rpc_integrity 00:05:09.474 ************************************ 00:05:09.474 06:36:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:09.474 06:36:36 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.474 06:36:36 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.474 06:36:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.474 ************************************ 00:05:09.474 START TEST rpc_plugins 00:05:09.474 ************************************ 00:05:09.474 06:36:36 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:09.474 06:36:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:09.474 06:36:36 rpc.rpc_plugins -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.474 06:36:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.474 06:36:36 rpc.rpc_plugins -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:09.474 06:36:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:09.474 06:36:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:09.474 06:36:36 rpc.rpc_plugins -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.474 06:36:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.474 06:36:36 rpc.rpc_plugins -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:09.474 06:36:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:09.474 { 00:05:09.474 "name": "Malloc1", 00:05:09.474 "aliases": [ 00:05:09.474 "54062e81-59bb-4e86-894d-08bf268350a3" 00:05:09.474 ], 00:05:09.474 "product_name": "Malloc disk", 00:05:09.474 "block_size": 4096, 00:05:09.474 "num_blocks": 256, 00:05:09.474 "uuid": "54062e81-59bb-4e86-894d-08bf268350a3", 00:05:09.474 "assigned_rate_limits": { 00:05:09.474 "rw_ios_per_sec": 0, 00:05:09.474 "rw_mbytes_per_sec": 0, 00:05:09.474 "r_mbytes_per_sec": 0, 00:05:09.474 "w_mbytes_per_sec": 0 00:05:09.474 }, 00:05:09.474 "claimed": false, 00:05:09.474 "zoned": false, 00:05:09.474 "supported_io_types": { 00:05:09.474 "read": true, 00:05:09.474 "write": true, 00:05:09.474 "unmap": true, 00:05:09.474 "flush": true, 00:05:09.474 "reset": true, 00:05:09.474 "nvme_admin": false, 00:05:09.474 "nvme_io": false, 00:05:09.474 "nvme_io_md": false, 00:05:09.474 "write_zeroes": true, 00:05:09.474 "zcopy": true, 00:05:09.474 "get_zone_info": false, 00:05:09.474 "zone_management": false, 00:05:09.474 "zone_append": false, 00:05:09.474 "compare": false, 00:05:09.474 "compare_and_write": false, 00:05:09.474 "abort": true, 00:05:09.474 "seek_hole": false, 00:05:09.474 "seek_data": false, 00:05:09.474 "copy": true, 00:05:09.474 "nvme_iov_md": false 00:05:09.474 }, 00:05:09.474 "memory_domains": [ 00:05:09.474 { 00:05:09.474 "dma_device_id": "system", 00:05:09.474 "dma_device_type": 1 00:05:09.474 }, 00:05:09.474 { 00:05:09.474 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:05:09.474 "dma_device_type": 2 00:05:09.474 } 00:05:09.474 ], 00:05:09.474 "driver_specific": {} 00:05:09.474 } 00:05:09.474 ]' 00:05:09.474 06:36:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:09.474 06:36:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:09.474 06:36:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:09.474 06:36:36 rpc.rpc_plugins -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.474 06:36:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.474 06:36:36 rpc.rpc_plugins -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:09.474 06:36:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:09.474 06:36:36 rpc.rpc_plugins -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.474 06:36:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.474 06:36:36 rpc.rpc_plugins -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:09.474 06:36:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:09.474 06:36:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:09.734 06:36:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:09.734 00:05:09.734 real 0m0.163s 00:05:09.734 user 0m0.095s 00:05:09.734 sys 0m0.028s 00:05:09.734 06:36:36 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.734 06:36:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.734 ************************************ 00:05:09.734 END TEST rpc_plugins 00:05:09.734 ************************************ 00:05:09.734 06:36:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:09.734 06:36:36 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.734 06:36:36 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.734 06:36:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.734 ************************************ 00:05:09.734 START TEST rpc_trace_cmd_test 00:05:09.734 ************************************ 00:05:09.734 06:36:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:09.734 06:36:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:09.734 06:36:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:09.734 06:36:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.734 06:36:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:09.734 06:36:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:09.734 06:36:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:09.734 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68115", 00:05:09.734 "tpoint_group_mask": "0x8", 00:05:09.734 "iscsi_conn": { 00:05:09.734 "mask": "0x2", 00:05:09.734 "tpoint_mask": "0x0" 00:05:09.734 }, 00:05:09.734 "scsi": { 00:05:09.734 "mask": "0x4", 00:05:09.734 "tpoint_mask": "0x0" 00:05:09.734 }, 00:05:09.734 "bdev": { 00:05:09.734 "mask": "0x8", 00:05:09.734 "tpoint_mask": "0xffffffffffffffff" 00:05:09.734 }, 00:05:09.734 "nvmf_rdma": { 00:05:09.734 "mask": "0x10", 00:05:09.734 "tpoint_mask": "0x0" 00:05:09.734 }, 00:05:09.734 "nvmf_tcp": { 00:05:09.734 "mask": "0x20", 00:05:09.734 "tpoint_mask": "0x0" 00:05:09.734 }, 00:05:09.734 "ftl": { 00:05:09.734 "mask": "0x40", 00:05:09.734 "tpoint_mask": "0x0" 00:05:09.734 }, 00:05:09.734 "blobfs": { 00:05:09.734 "mask": "0x80", 00:05:09.734 
"tpoint_mask": "0x0" 00:05:09.734 }, 00:05:09.734 "dsa": { 00:05:09.734 "mask": "0x200", 00:05:09.734 "tpoint_mask": "0x0" 00:05:09.734 }, 00:05:09.734 "thread": { 00:05:09.734 "mask": "0x400", 00:05:09.734 "tpoint_mask": "0x0" 00:05:09.734 }, 00:05:09.734 "nvme_pcie": { 00:05:09.734 "mask": "0x800", 00:05:09.734 "tpoint_mask": "0x0" 00:05:09.734 }, 00:05:09.734 "iaa": { 00:05:09.734 "mask": "0x1000", 00:05:09.734 "tpoint_mask": "0x0" 00:05:09.734 }, 00:05:09.734 "nvme_tcp": { 00:05:09.734 "mask": "0x2000", 00:05:09.734 "tpoint_mask": "0x0" 00:05:09.734 }, 00:05:09.734 "bdev_nvme": { 00:05:09.734 "mask": "0x4000", 00:05:09.734 "tpoint_mask": "0x0" 00:05:09.734 }, 00:05:09.734 "sock": { 00:05:09.734 "mask": "0x8000", 00:05:09.734 "tpoint_mask": "0x0" 00:05:09.734 } 00:05:09.734 }' 00:05:09.734 06:36:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:09.734 06:36:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:09.734 06:36:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:09.734 06:36:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:09.734 06:36:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:09.734 06:36:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:09.734 06:36:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:09.734 06:36:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:09.734 06:36:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:09.994 06:36:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:09.994 00:05:09.994 real 0m0.232s 00:05:09.994 user 0m0.183s 00:05:09.994 sys 0m0.040s 00:05:09.994 06:36:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.994 06:36:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:09.994 ************************************ 00:05:09.994 END TEST rpc_trace_cmd_test 00:05:09.994 ************************************ 00:05:09.994 06:36:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:09.994 06:36:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:09.994 06:36:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:09.994 06:36:37 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.994 06:36:37 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.994 06:36:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.994 ************************************ 00:05:09.994 START TEST rpc_daemon_integrity 00:05:09.994 ************************************ 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:09.994 06:36:37 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:09.994 { 00:05:09.994 "name": "Malloc2", 00:05:09.994 "aliases": [ 00:05:09.994 "f7e287e2-4906-4590-be01-2463a6d29266" 00:05:09.994 ], 00:05:09.994 "product_name": "Malloc disk", 00:05:09.994 "block_size": 512, 00:05:09.994 "num_blocks": 16384, 00:05:09.994 "uuid": "f7e287e2-4906-4590-be01-2463a6d29266", 00:05:09.994 "assigned_rate_limits": { 00:05:09.994 "rw_ios_per_sec": 0, 00:05:09.994 "rw_mbytes_per_sec": 0, 00:05:09.994 "r_mbytes_per_sec": 0, 00:05:09.994 "w_mbytes_per_sec": 0 00:05:09.994 }, 00:05:09.994 "claimed": false, 00:05:09.994 "zoned": false, 00:05:09.994 "supported_io_types": { 00:05:09.994 "read": true, 00:05:09.994 "write": true, 00:05:09.994 "unmap": true, 00:05:09.994 "flush": true, 00:05:09.994 "reset": true, 00:05:09.994 "nvme_admin": false, 00:05:09.994 "nvme_io": false, 00:05:09.994 "nvme_io_md": false, 00:05:09.994 "write_zeroes": true, 00:05:09.994 "zcopy": true, 00:05:09.994 "get_zone_info": false, 00:05:09.994 "zone_management": false, 00:05:09.994 "zone_append": false, 00:05:09.994 "compare": false, 00:05:09.994 "compare_and_write": false, 00:05:09.994 "abort": true, 00:05:09.994 "seek_hole": false, 00:05:09.994 "seek_data": false, 00:05:09.994 "copy": true, 00:05:09.994 "nvme_iov_md": false 00:05:09.994 }, 00:05:09.994 "memory_domains": [ 00:05:09.994 { 00:05:09.994 "dma_device_id": "system", 00:05:09.994 "dma_device_type": 1 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.994 "dma_device_type": 2 00:05:09.994 } 00:05:09.994 ], 00:05:09.994 "driver_specific": {} 00:05:09.994 } 00:05:09.994 ]' 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:09.994 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:09.995 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.995 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.995 [2024-08-14 06:36:37.231766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:09.995 [2024-08-14 06:36:37.231862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:09.995 [2024-08-14 06:36:37.231891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:05:09.995 [2024-08-14 06:36:37.231907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:09.995 [2024-08-14 06:36:37.234472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:09.995 [2024-08-14 06:36:37.234519] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:09.995 Passthru0 00:05:09.995 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:09.995 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:09.995 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:09.995 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.254 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:10.254 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:10.254 { 00:05:10.254 "name": "Malloc2", 00:05:10.254 "aliases": [ 00:05:10.254 "f7e287e2-4906-4590-be01-2463a6d29266" 00:05:10.254 ], 00:05:10.254 "product_name": "Malloc disk", 00:05:10.254 "block_size": 512, 00:05:10.254 "num_blocks": 16384, 00:05:10.254 "uuid": "f7e287e2-4906-4590-be01-2463a6d29266", 00:05:10.254 "assigned_rate_limits": { 00:05:10.254 "rw_ios_per_sec": 0, 00:05:10.254 "rw_mbytes_per_sec": 0, 00:05:10.254 "r_mbytes_per_sec": 0, 00:05:10.254 "w_mbytes_per_sec": 0 00:05:10.254 }, 00:05:10.254 "claimed": true, 00:05:10.254 "claim_type": "exclusive_write", 00:05:10.254 "zoned": false, 00:05:10.254 "supported_io_types": { 00:05:10.254 "read": true, 00:05:10.254 "write": true, 00:05:10.254 "unmap": true, 00:05:10.254 "flush": true, 00:05:10.254 "reset": true, 00:05:10.254 "nvme_admin": false, 00:05:10.254 "nvme_io": false, 00:05:10.254 "nvme_io_md": false, 00:05:10.254 "write_zeroes": true, 00:05:10.254 "zcopy": true, 00:05:10.254 "get_zone_info": false, 00:05:10.254 "zone_management": false, 00:05:10.254 "zone_append": false, 00:05:10.254 "compare": false, 00:05:10.254 "compare_and_write": false, 00:05:10.254 "abort": true, 00:05:10.254 "seek_hole": false, 00:05:10.254 "seek_data": false, 00:05:10.254 "copy": true, 00:05:10.254 "nvme_iov_md": false 00:05:10.254 }, 00:05:10.254 "memory_domains": [ 00:05:10.254 { 00:05:10.254 "dma_device_id": "system", 00:05:10.254 "dma_device_type": 1 00:05:10.254 }, 00:05:10.254 { 00:05:10.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.254 "dma_device_type": 2 00:05:10.254 } 00:05:10.254 ], 00:05:10.254 "driver_specific": {} 00:05:10.254 }, 00:05:10.254 { 00:05:10.254 "name": "Passthru0", 00:05:10.254 "aliases": [ 00:05:10.254 "afc8c853-6e16-5f6c-a1c4-a6f29ead1a02" 00:05:10.254 ], 00:05:10.254 "product_name": "passthru", 00:05:10.254 "block_size": 512, 00:05:10.254 "num_blocks": 16384, 00:05:10.254 "uuid": "afc8c853-6e16-5f6c-a1c4-a6f29ead1a02", 00:05:10.254 "assigned_rate_limits": { 00:05:10.254 "rw_ios_per_sec": 0, 00:05:10.254 "rw_mbytes_per_sec": 0, 00:05:10.254 "r_mbytes_per_sec": 0, 00:05:10.254 "w_mbytes_per_sec": 0 00:05:10.254 }, 00:05:10.254 "claimed": false, 00:05:10.254 "zoned": false, 00:05:10.254 "supported_io_types": { 00:05:10.254 "read": true, 00:05:10.254 "write": true, 00:05:10.254 "unmap": true, 00:05:10.254 "flush": true, 00:05:10.254 "reset": true, 00:05:10.254 "nvme_admin": false, 00:05:10.254 "nvme_io": false, 00:05:10.254 "nvme_io_md": false, 00:05:10.254 "write_zeroes": true, 00:05:10.254 "zcopy": true, 00:05:10.254 "get_zone_info": false, 00:05:10.254 "zone_management": false, 00:05:10.254 "zone_append": false, 00:05:10.254 "compare": false, 00:05:10.254 "compare_and_write": false, 00:05:10.254 "abort": true, 00:05:10.254 "seek_hole": false, 00:05:10.254 "seek_data": false, 00:05:10.254 "copy": true, 00:05:10.254 "nvme_iov_md": false 00:05:10.254 }, 00:05:10.254 
"memory_domains": [ 00:05:10.254 { 00:05:10.254 "dma_device_id": "system", 00:05:10.254 "dma_device_type": 1 00:05:10.254 }, 00:05:10.254 { 00:05:10.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.254 "dma_device_type": 2 00:05:10.254 } 00:05:10.254 ], 00:05:10.254 "driver_specific": { 00:05:10.254 "passthru": { 00:05:10.254 "name": "Passthru0", 00:05:10.254 "base_bdev_name": "Malloc2" 00:05:10.254 } 00:05:10.254 } 00:05:10.254 } 00:05:10.254 ]' 00:05:10.254 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:10.254 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.255 00:05:10.255 real 0m0.296s 00:05:10.255 user 0m0.168s 00:05:10.255 sys 0m0.054s 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.255 06:36:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.255 ************************************ 00:05:10.255 END TEST rpc_daemon_integrity 00:05:10.255 ************************************ 00:05:10.255 06:36:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:10.255 06:36:37 rpc -- rpc/rpc.sh@84 -- # killprocess 68115 00:05:10.255 06:36:37 rpc -- common/autotest_common.sh@946 -- # '[' -z 68115 ']' 00:05:10.255 06:36:37 rpc -- common/autotest_common.sh@950 -- # kill -0 68115 00:05:10.255 06:36:37 rpc -- common/autotest_common.sh@951 -- # uname 00:05:10.255 06:36:37 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:10.255 06:36:37 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68115 00:05:10.255 killing process with pid 68115 00:05:10.255 06:36:37 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:10.255 06:36:37 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:10.255 06:36:37 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68115' 00:05:10.255 06:36:37 rpc -- common/autotest_common.sh@965 -- # kill 68115 00:05:10.255 06:36:37 rpc -- common/autotest_common.sh@970 -- # wait 68115 00:05:10.823 00:05:10.823 real 0m2.736s 00:05:10.823 user 0m3.331s 
00:05:10.823 sys 0m0.812s 00:05:10.823 06:36:37 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.823 06:36:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.823 ************************************ 00:05:10.823 END TEST rpc 00:05:10.823 ************************************ 00:05:10.823 06:36:37 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:10.823 06:36:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:10.823 06:36:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.823 06:36:37 -- common/autotest_common.sh@10 -- # set +x 00:05:10.823 ************************************ 00:05:10.823 START TEST skip_rpc 00:05:10.823 ************************************ 00:05:10.823 06:36:37 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:10.823 * Looking for test storage... 00:05:10.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:10.823 06:36:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:10.823 06:36:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:10.823 06:36:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:10.823 06:36:38 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:10.823 06:36:38 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.823 06:36:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.823 ************************************ 00:05:10.823 START TEST skip_rpc 00:05:10.823 ************************************ 00:05:10.823 06:36:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:10.823 06:36:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:10.823 06:36:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=68313 00:05:10.823 06:36:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.823 06:36:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:11.082 [2024-08-14 06:36:38.154620] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
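The skip_rpc case above starts spdk_tgt with --no-rpc-server, so the spdk_get_version call a few lines below is expected to fail; the test only passes when the RPC client cannot reach a server. A condensed sketch of the same negative check, using the binary path and flags shown in this log:

    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    "$SPDK_REPO/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                # give the target time to initialize
    if "$SPDK_REPO/scripts/rpc.py" spdk_get_version; then
        echo "unexpected: an RPC server answered" >&2
    else
        echo "spdk_get_version failed as expected (no RPC server)"
    fi
    kill "$tgt_pid"
    wait "$tgt_pid" || true                # reap the target; nonzero exit is expected after kill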
00:05:11.082 [2024-08-14 06:36:38.154784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68313 ] 00:05:11.082 [2024-08-14 06:36:38.301972] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.341 [2024-08-14 06:36:38.354546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@646 -- # local es=0 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # local arg=rpc_cmd 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # type -t rpc_cmd 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # rpc_cmd spdk_get_version 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # es=1 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 68313 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 68313 ']' 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 68313 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68313 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68313' 00:05:16.628 killing process with pid 68313 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 68313 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 68313 00:05:16.628 00:05:16.628 real 0m5.459s 00:05:16.628 user 0m5.055s 00:05:16.628 sys 0m0.326s 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.628 06:36:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.628 ************************************ 00:05:16.628 END TEST skip_rpc 00:05:16.628 
************************************ 00:05:16.628 06:36:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:16.628 06:36:43 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.628 06:36:43 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.628 06:36:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.628 ************************************ 00:05:16.628 START TEST skip_rpc_with_json 00:05:16.628 ************************************ 00:05:16.628 06:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:16.628 06:36:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:16.628 06:36:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=68397 00:05:16.628 06:36:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.628 06:36:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.628 06:36:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 68397 00:05:16.628 06:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 68397 ']' 00:05:16.628 06:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.628 06:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:16.628 06:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.628 06:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:16.628 06:36:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.628 [2024-08-14 06:36:43.683463] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
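The JSON variant below first provokes a "No such device" error from nvmf_get_transports, creates a TCP transport, and then saves the live configuration with save_config; the full dump of test/rpc/config.json follows, and the target is later restarted from that file with --json. A condensed sketch of the same save-and-reload cycle, assuming the standard scripts/rpc.py client and the config path used by this test:

    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    CONFIG=$SPDK_REPO/test/rpc/config.json
    rpc() { "$SPDK_REPO/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp       # make the saved config non-trivial
    rpc save_config > "$CONFIG"            # dump the current subsystem configuration as JSON
    # ...stop the running target, then start a fresh one preloaded from the file:
    "$SPDK_REPO/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json "$CONFIG"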
00:05:16.628 [2024-08-14 06:36:43.683728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68397 ] 00:05:16.628 [2024-08-14 06:36:43.832135] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.889 [2024-08-14 06:36:43.884429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.457 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:17.457 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:17.457 06:36:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:17.457 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:17.457 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.457 [2024-08-14 06:36:44.542310] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:17.457 request: 00:05:17.457 { 00:05:17.457 "trtype": "tcp", 00:05:17.457 "method": "nvmf_get_transports", 00:05:17.457 "req_id": 1 00:05:17.457 } 00:05:17.457 Got JSON-RPC error response 00:05:17.457 response: 00:05:17.457 { 00:05:17.457 "code": -19, 00:05:17.457 "message": "No such device" 00:05:17.457 } 00:05:17.457 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:05:17.457 06:36:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:17.457 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:17.457 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.457 [2024-08-14 06:36:44.554395] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.457 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:17.457 06:36:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:17.457 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:17.457 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.723 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:17.723 06:36:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:17.723 { 00:05:17.723 "subsystems": [ 00:05:17.723 { 00:05:17.723 "subsystem": "fsdev", 00:05:17.723 "config": [ 00:05:17.723 { 00:05:17.723 "method": "fsdev_set_opts", 00:05:17.723 "params": { 00:05:17.723 "fsdev_io_pool_size": 65535, 00:05:17.723 "fsdev_io_cache_size": 256 00:05:17.723 } 00:05:17.723 } 00:05:17.723 ] 00:05:17.723 }, 00:05:17.723 { 00:05:17.723 "subsystem": "keyring", 00:05:17.723 "config": [] 00:05:17.723 }, 00:05:17.723 { 00:05:17.723 "subsystem": "iobuf", 00:05:17.723 "config": [ 00:05:17.723 { 00:05:17.723 "method": "iobuf_set_options", 00:05:17.723 "params": { 00:05:17.723 "small_pool_count": 8192, 00:05:17.723 "large_pool_count": 1024, 00:05:17.723 "small_bufsize": 8192, 00:05:17.723 "large_bufsize": 135168 00:05:17.723 } 00:05:17.723 } 00:05:17.723 ] 00:05:17.723 }, 00:05:17.723 { 00:05:17.723 "subsystem": "sock", 00:05:17.723 "config": [ 00:05:17.723 { 00:05:17.723 "method": 
"sock_set_default_impl", 00:05:17.723 "params": { 00:05:17.723 "impl_name": "posix" 00:05:17.723 } 00:05:17.723 }, 00:05:17.723 { 00:05:17.723 "method": "sock_impl_set_options", 00:05:17.723 "params": { 00:05:17.723 "impl_name": "ssl", 00:05:17.723 "recv_buf_size": 4096, 00:05:17.723 "send_buf_size": 4096, 00:05:17.723 "enable_recv_pipe": true, 00:05:17.723 "enable_quickack": false, 00:05:17.723 "enable_placement_id": 0, 00:05:17.723 "enable_zerocopy_send_server": true, 00:05:17.723 "enable_zerocopy_send_client": false, 00:05:17.723 "zerocopy_threshold": 0, 00:05:17.723 "tls_version": 0, 00:05:17.723 "enable_ktls": false 00:05:17.723 } 00:05:17.723 }, 00:05:17.723 { 00:05:17.723 "method": "sock_impl_set_options", 00:05:17.723 "params": { 00:05:17.723 "impl_name": "posix", 00:05:17.723 "recv_buf_size": 2097152, 00:05:17.723 "send_buf_size": 2097152, 00:05:17.723 "enable_recv_pipe": true, 00:05:17.723 "enable_quickack": false, 00:05:17.723 "enable_placement_id": 0, 00:05:17.723 "enable_zerocopy_send_server": true, 00:05:17.723 "enable_zerocopy_send_client": false, 00:05:17.723 "zerocopy_threshold": 0, 00:05:17.723 "tls_version": 0, 00:05:17.723 "enable_ktls": false 00:05:17.723 } 00:05:17.723 } 00:05:17.723 ] 00:05:17.723 }, 00:05:17.723 { 00:05:17.723 "subsystem": "vmd", 00:05:17.723 "config": [] 00:05:17.723 }, 00:05:17.723 { 00:05:17.723 "subsystem": "accel", 00:05:17.723 "config": [ 00:05:17.723 { 00:05:17.723 "method": "accel_set_options", 00:05:17.723 "params": { 00:05:17.723 "small_cache_size": 128, 00:05:17.723 "large_cache_size": 16, 00:05:17.723 "task_count": 2048, 00:05:17.723 "sequence_count": 2048, 00:05:17.723 "buf_count": 2048 00:05:17.723 } 00:05:17.723 } 00:05:17.723 ] 00:05:17.723 }, 00:05:17.723 { 00:05:17.723 "subsystem": "bdev", 00:05:17.723 "config": [ 00:05:17.723 { 00:05:17.723 "method": "bdev_set_options", 00:05:17.723 "params": { 00:05:17.723 "bdev_io_pool_size": 65535, 00:05:17.723 "bdev_io_cache_size": 256, 00:05:17.723 "bdev_auto_examine": true, 00:05:17.723 "iobuf_small_cache_size": 128, 00:05:17.723 "iobuf_large_cache_size": 16 00:05:17.723 } 00:05:17.723 }, 00:05:17.723 { 00:05:17.723 "method": "bdev_raid_set_options", 00:05:17.723 "params": { 00:05:17.723 "process_window_size_kb": 1024, 00:05:17.723 "process_max_bandwidth_mb_sec": 0 00:05:17.723 } 00:05:17.723 }, 00:05:17.723 { 00:05:17.723 "method": "bdev_iscsi_set_options", 00:05:17.723 "params": { 00:05:17.723 "timeout_sec": 30 00:05:17.724 } 00:05:17.724 }, 00:05:17.724 { 00:05:17.724 "method": "bdev_nvme_set_options", 00:05:17.724 "params": { 00:05:17.724 "action_on_timeout": "none", 00:05:17.724 "timeout_us": 0, 00:05:17.724 "timeout_admin_us": 0, 00:05:17.724 "keep_alive_timeout_ms": 10000, 00:05:17.724 "arbitration_burst": 0, 00:05:17.724 "low_priority_weight": 0, 00:05:17.724 "medium_priority_weight": 0, 00:05:17.724 "high_priority_weight": 0, 00:05:17.724 "nvme_adminq_poll_period_us": 10000, 00:05:17.724 "nvme_ioq_poll_period_us": 0, 00:05:17.724 "io_queue_requests": 0, 00:05:17.724 "delay_cmd_submit": true, 00:05:17.724 "transport_retry_count": 4, 00:05:17.724 "bdev_retry_count": 3, 00:05:17.724 "transport_ack_timeout": 0, 00:05:17.724 "ctrlr_loss_timeout_sec": 0, 00:05:17.724 "reconnect_delay_sec": 0, 00:05:17.724 "fast_io_fail_timeout_sec": 0, 00:05:17.724 "disable_auto_failback": false, 00:05:17.724 "generate_uuids": false, 00:05:17.724 "transport_tos": 0, 00:05:17.724 "nvme_error_stat": false, 00:05:17.724 "rdma_srq_size": 0, 00:05:17.724 "io_path_stat": false, 00:05:17.724 
"allow_accel_sequence": false, 00:05:17.724 "rdma_max_cq_size": 0, 00:05:17.724 "rdma_cm_event_timeout_ms": 0, 00:05:17.724 "dhchap_digests": [ 00:05:17.724 "sha256", 00:05:17.724 "sha384", 00:05:17.724 "sha512" 00:05:17.724 ], 00:05:17.724 "dhchap_dhgroups": [ 00:05:17.724 "null", 00:05:17.724 "ffdhe2048", 00:05:17.724 "ffdhe3072", 00:05:17.724 "ffdhe4096", 00:05:17.724 "ffdhe6144", 00:05:17.724 "ffdhe8192" 00:05:17.724 ] 00:05:17.724 } 00:05:17.724 }, 00:05:17.724 { 00:05:17.724 "method": "bdev_nvme_set_hotplug", 00:05:17.724 "params": { 00:05:17.724 "period_us": 100000, 00:05:17.724 "enable": false 00:05:17.724 } 00:05:17.724 }, 00:05:17.724 { 00:05:17.724 "method": "bdev_wait_for_examine" 00:05:17.724 } 00:05:17.724 ] 00:05:17.724 }, 00:05:17.724 { 00:05:17.724 "subsystem": "scsi", 00:05:17.724 "config": null 00:05:17.724 }, 00:05:17.724 { 00:05:17.724 "subsystem": "scheduler", 00:05:17.724 "config": [ 00:05:17.724 { 00:05:17.724 "method": "framework_set_scheduler", 00:05:17.724 "params": { 00:05:17.724 "name": "static" 00:05:17.724 } 00:05:17.724 } 00:05:17.724 ] 00:05:17.724 }, 00:05:17.724 { 00:05:17.724 "subsystem": "vhost_scsi", 00:05:17.724 "config": [] 00:05:17.724 }, 00:05:17.724 { 00:05:17.724 "subsystem": "vhost_blk", 00:05:17.724 "config": [] 00:05:17.724 }, 00:05:17.724 { 00:05:17.724 "subsystem": "ublk", 00:05:17.724 "config": [] 00:05:17.724 }, 00:05:17.724 { 00:05:17.724 "subsystem": "nbd", 00:05:17.724 "config": [] 00:05:17.724 }, 00:05:17.724 { 00:05:17.724 "subsystem": "nvmf", 00:05:17.724 "config": [ 00:05:17.724 { 00:05:17.724 "method": "nvmf_set_config", 00:05:17.724 "params": { 00:05:17.724 "discovery_filter": "match_any", 00:05:17.724 "admin_cmd_passthru": { 00:05:17.724 "identify_ctrlr": false 00:05:17.724 } 00:05:17.724 } 00:05:17.724 }, 00:05:17.724 { 00:05:17.724 "method": "nvmf_set_max_subsystems", 00:05:17.724 "params": { 00:05:17.724 "max_subsystems": 1024 00:05:17.724 } 00:05:17.724 }, 00:05:17.724 { 00:05:17.724 "method": "nvmf_set_crdt", 00:05:17.724 "params": { 00:05:17.724 "crdt1": 0, 00:05:17.724 "crdt2": 0, 00:05:17.724 "crdt3": 0 00:05:17.724 } 00:05:17.724 }, 00:05:17.724 { 00:05:17.724 "method": "nvmf_create_transport", 00:05:17.724 "params": { 00:05:17.724 "trtype": "TCP", 00:05:17.724 "max_queue_depth": 128, 00:05:17.724 "max_io_qpairs_per_ctrlr": 127, 00:05:17.724 "in_capsule_data_size": 4096, 00:05:17.724 "max_io_size": 131072, 00:05:17.724 "io_unit_size": 131072, 00:05:17.724 "max_aq_depth": 128, 00:05:17.724 "num_shared_buffers": 511, 00:05:17.724 "buf_cache_size": 4294967295, 00:05:17.724 "dif_insert_or_strip": false, 00:05:17.724 "zcopy": false, 00:05:17.724 "c2h_success": true, 00:05:17.724 "sock_priority": 0, 00:05:17.724 "abort_timeout_sec": 1, 00:05:17.724 "ack_timeout": 0, 00:05:17.724 "data_wr_pool_size": 0 00:05:17.724 } 00:05:17.724 } 00:05:17.724 ] 00:05:17.724 }, 00:05:17.724 { 00:05:17.724 "subsystem": "iscsi", 00:05:17.724 "config": [ 00:05:17.724 { 00:05:17.724 "method": "iscsi_set_options", 00:05:17.724 "params": { 00:05:17.724 "node_base": "iqn.2016-06.io.spdk", 00:05:17.724 "max_sessions": 128, 00:05:17.724 "max_connections_per_session": 2, 00:05:17.724 "max_queue_depth": 64, 00:05:17.724 "default_time2wait": 2, 00:05:17.724 "default_time2retain": 20, 00:05:17.724 "first_burst_length": 8192, 00:05:17.724 "immediate_data": true, 00:05:17.724 "allow_duplicated_isid": false, 00:05:17.724 "error_recovery_level": 0, 00:05:17.724 "nop_timeout": 60, 00:05:17.724 "nop_in_interval": 30, 00:05:17.724 "disable_chap": false, 
00:05:17.724 "require_chap": false, 00:05:17.724 "mutual_chap": false, 00:05:17.724 "chap_group": 0, 00:05:17.724 "max_large_datain_per_connection": 64, 00:05:17.724 "max_r2t_per_connection": 4, 00:05:17.724 "pdu_pool_size": 36864, 00:05:17.724 "immediate_data_pool_size": 16384, 00:05:17.724 "data_out_pool_size": 2048 00:05:17.724 } 00:05:17.724 } 00:05:17.724 ] 00:05:17.724 } 00:05:17.724 ] 00:05:17.724 } 00:05:17.724 06:36:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:17.724 06:36:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 68397 00:05:17.724 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 68397 ']' 00:05:17.724 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 68397 00:05:17.724 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:17.724 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:17.724 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68397 00:05:17.724 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:17.724 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:17.724 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68397' 00:05:17.724 killing process with pid 68397 00:05:17.724 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 68397 00:05:17.724 06:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 68397 00:05:18.001 06:36:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=68425 00:05:18.001 06:36:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:18.001 06:36:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:23.274 06:36:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 68425 00:05:23.274 06:36:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 68425 ']' 00:05:23.274 06:36:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 68425 00:05:23.274 06:36:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:23.274 06:36:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:23.274 06:36:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68425 00:05:23.274 killing process with pid 68425 00:05:23.274 06:36:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:23.275 06:36:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:23.275 06:36:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68425' 00:05:23.275 06:36:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 68425 00:05:23.275 06:36:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 68425 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:23.533 ************************************ 00:05:23.533 END TEST skip_rpc_with_json 00:05:23.533 ************************************ 00:05:23.533 00:05:23.533 real 0m7.034s 00:05:23.533 user 0m6.610s 00:05:23.533 sys 0m0.733s 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.533 06:36:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:23.533 06:36:50 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.533 06:36:50 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.533 06:36:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.533 ************************************ 00:05:23.533 START TEST skip_rpc_with_delay 00:05:23.533 ************************************ 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # local es=0 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:23.533 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.533 [2024-08-14 06:36:50.770719] app.c: 833:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:23.533 [2024-08-14 06:36:50.770871] app.c: 712:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:23.793 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # es=1 00:05:23.793 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:05:23.793 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:05:23.793 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:05:23.793 00:05:23.793 real 0m0.148s 00:05:23.793 user 0m0.073s 00:05:23.793 sys 0m0.073s 00:05:23.793 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.793 06:36:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:23.793 ************************************ 00:05:23.793 END TEST skip_rpc_with_delay 00:05:23.793 ************************************ 00:05:23.793 06:36:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:23.793 06:36:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:23.793 06:36:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:23.793 06:36:50 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.793 06:36:50 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.793 06:36:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.793 ************************************ 00:05:23.793 START TEST exit_on_failed_rpc_init 00:05:23.793 ************************************ 00:05:23.793 06:36:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:23.793 06:36:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=68542 00:05:23.793 06:36:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.793 06:36:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 68542 00:05:23.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.793 06:36:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 68542 ']' 00:05:23.793 06:36:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.793 06:36:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:23.793 06:36:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.793 06:36:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:23.793 06:36:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.793 [2024-08-14 06:36:50.996117] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:05:23.793 [2024-08-14 06:36:50.996273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68542 ] 00:05:24.053 [2024-08-14 06:36:51.143261] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.053 [2024-08-14 06:36:51.196515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.622 06:36:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:24.622 06:36:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:24.622 06:36:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.622 06:36:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.622 06:36:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # local es=0 00:05:24.622 06:36:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.622 06:36:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.622 06:36:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:24.622 06:36:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.882 06:36:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:24.882 06:36:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.882 06:36:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:24.882 06:36:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.882 06:36:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:24.882 06:36:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.882 [2024-08-14 06:36:51.982439] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:05:24.882 [2024-08-14 06:36:51.982677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68560 ] 00:05:25.141 [2024-08-14 06:36:52.134400] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.141 [2024-08-14 06:36:52.186876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.141 [2024-08-14 06:36:52.187067] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:25.141 [2024-08-14 06:36:52.187332] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:25.141 [2024-08-14 06:36:52.187457] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # es=234 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@658 -- # es=106 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # case "$es" in 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@666 -- # es=1 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 68542 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 68542 ']' 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 68542 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68542 00:05:25.141 killing process with pid 68542 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68542' 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 68542 00:05:25.141 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 68542 00:05:25.725 00:05:25.725 real 0m1.870s 00:05:25.725 user 0m2.075s 00:05:25.725 sys 0m0.527s 00:05:25.725 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.725 06:36:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.725 ************************************ 00:05:25.725 END TEST exit_on_failed_rpc_init 00:05:25.725 ************************************ 00:05:25.725 06:36:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:25.725 00:05:25.725 real 0m14.898s 00:05:25.725 user 0m13.939s 00:05:25.725 sys 0m1.937s 00:05:25.725 ************************************ 00:05:25.725 END TEST skip_rpc 00:05:25.725 ************************************ 00:05:25.725 06:36:52 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.725 06:36:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.725 06:36:52 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:25.725 06:36:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.725 06:36:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.725 06:36:52 -- common/autotest_common.sh@10 -- # set +x 00:05:25.725 
************************************ 00:05:25.725 START TEST rpc_client 00:05:25.725 ************************************ 00:05:25.725 06:36:52 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:25.985 * Looking for test storage... 00:05:25.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:25.985 06:36:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:25.985 OK 00:05:25.985 06:36:53 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:25.985 00:05:25.985 real 0m0.183s 00:05:25.985 user 0m0.071s 00:05:25.985 sys 0m0.122s 00:05:25.985 ************************************ 00:05:25.985 END TEST rpc_client 00:05:25.985 ************************************ 00:05:25.985 06:36:53 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.985 06:36:53 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:25.985 06:36:53 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:25.985 06:36:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.985 06:36:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.985 06:36:53 -- common/autotest_common.sh@10 -- # set +x 00:05:25.985 ************************************ 00:05:25.985 START TEST json_config 00:05:25.985 ************************************ 00:05:25.985 06:36:53 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:25.985 06:36:53 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4260025c-9f07-406e-a2ce-e26fb147f69f 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=4260025c-9f07-406e-a2ce-e26fb147f69f 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:25.985 06:36:53 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:26.245 06:36:53 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.245 06:36:53 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.245 06:36:53 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.245 06:36:53 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.245 06:36:53 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.245 06:36:53 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.245 06:36:53 json_config -- paths/export.sh@5 -- # export PATH 00:05:26.245 06:36:53 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.245 06:36:53 json_config -- nvmf/common.sh@47 -- # : 0 00:05:26.246 06:36:53 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:26.246 06:36:53 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:26.246 06:36:53 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.246 06:36:53 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.246 06:36:53 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.246 06:36:53 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:26.246 06:36:53 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:26.246 06:36:53 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:26.246 06:36:53 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:26.246 06:36:53 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:26.246 06:36:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:26.246 06:36:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:26.246 06:36:53 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:26.246 06:36:53 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:26.246 
WARNING: No tests are enabled so not running JSON configuration tests 00:05:26.246 06:36:53 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:26.246 00:05:26.246 real 0m0.129s 00:05:26.246 user 0m0.068s 00:05:26.246 sys 0m0.058s 00:05:26.246 06:36:53 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.246 ************************************ 00:05:26.246 END TEST json_config 00:05:26.246 ************************************ 00:05:26.246 06:36:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.246 06:36:53 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:26.246 06:36:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.246 06:36:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.246 06:36:53 -- common/autotest_common.sh@10 -- # set +x 00:05:26.246 ************************************ 00:05:26.246 START TEST json_config_extra_key 00:05:26.246 ************************************ 00:05:26.246 06:36:53 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:26.246 06:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4260025c-9f07-406e-a2ce-e26fb147f69f 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=4260025c-9f07-406e-a2ce-e26fb147f69f 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:26.246 06:36:53 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.246 06:36:53 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.246 06:36:53 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.246 
06:36:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.246 06:36:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.246 06:36:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.246 06:36:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:26.246 06:36:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:26.246 06:36:53 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:26.246 06:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:26.246 06:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:26.246 06:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:26.246 06:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:26.246 06:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:26.246 06:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:26.246 06:36:53 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:26.246 06:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:26.246 06:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:26.246 06:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.246 06:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:26.246 INFO: launching applications... 00:05:26.246 06:36:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:26.246 06:36:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:26.246 06:36:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:26.246 06:36:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.246 06:36:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.246 06:36:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.246 06:36:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.246 06:36:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.246 06:36:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=68713 00:05:26.246 06:36:53 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:26.246 Waiting for target to run... 00:05:26.246 06:36:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:26.246 06:36:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 68713 /var/tmp/spdk_tgt.sock 00:05:26.246 06:36:53 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 68713 ']' 00:05:26.246 06:36:53 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.246 06:36:53 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:26.246 06:36:53 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.246 06:36:53 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:26.246 06:36:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:26.504 [2024-08-14 06:36:53.533931] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:05:26.504 [2024-08-14 06:36:53.534591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68713 ] 00:05:26.764 [2024-08-14 06:36:53.895397] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.764 [2024-08-14 06:36:53.928511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.333 06:36:54 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:27.333 00:05:27.333 INFO: shutting down applications... 00:05:27.333 06:36:54 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:27.333 06:36:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:27.333 06:36:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:27.333 06:36:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:27.333 06:36:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:27.333 06:36:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:27.333 06:36:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 68713 ]] 00:05:27.333 06:36:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 68713 00:05:27.333 06:36:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:27.333 06:36:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.333 06:36:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 68713 00:05:27.333 06:36:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.902 SPDK target shutdown done 00:05:27.902 Success 00:05:27.902 06:36:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.902 06:36:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.902 06:36:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 68713 00:05:27.902 06:36:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:27.902 06:36:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:27.902 06:36:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:27.902 06:36:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:27.902 06:36:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:27.902 00:05:27.902 real 0m1.607s 00:05:27.902 user 0m1.436s 00:05:27.902 sys 0m0.436s 00:05:27.902 06:36:54 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.902 06:36:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:27.902 ************************************ 00:05:27.902 END TEST json_config_extra_key 00:05:27.902 ************************************ 00:05:27.902 06:36:54 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.902 06:36:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.902 06:36:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.902 06:36:54 -- common/autotest_common.sh@10 -- # set +x 00:05:27.902 ************************************ 00:05:27.902 START TEST alias_rpc 00:05:27.902 
************************************ 00:05:27.902 06:36:54 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.902 * Looking for test storage... 00:05:27.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:27.902 06:36:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:27.902 06:36:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68784 00:05:27.902 06:36:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.902 06:36:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68784 00:05:27.902 06:36:55 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 68784 ']' 00:05:27.902 06:36:55 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.902 06:36:55 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:27.902 06:36:55 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.902 06:36:55 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:27.902 06:36:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.161 [2024-08-14 06:36:55.216962] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:05:28.161 [2024-08-14 06:36:55.217207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68784 ] 00:05:28.161 [2024-08-14 06:36:55.363705] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.420 [2024-08-14 06:36:55.416606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.989 06:36:56 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:28.989 06:36:56 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:28.989 06:36:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:29.248 06:36:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68784 00:05:29.248 06:36:56 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 68784 ']' 00:05:29.248 06:36:56 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 68784 00:05:29.248 06:36:56 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:29.248 06:36:56 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:29.248 06:36:56 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68784 00:05:29.248 killing process with pid 68784 00:05:29.248 06:36:56 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:29.248 06:36:56 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:29.248 06:36:56 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68784' 00:05:29.248 06:36:56 alias_rpc -- common/autotest_common.sh@965 -- # kill 68784 00:05:29.248 06:36:56 alias_rpc -- common/autotest_common.sh@970 -- # wait 68784 00:05:29.508 ************************************ 00:05:29.508 END TEST alias_rpc 00:05:29.508 ************************************ 00:05:29.508 00:05:29.508 real 0m1.755s 
00:05:29.508 user 0m1.871s 00:05:29.508 sys 0m0.468s 00:05:29.508 06:36:56 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.508 06:36:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.768 06:36:56 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:29.768 06:36:56 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:29.768 06:36:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.768 06:36:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.768 06:36:56 -- common/autotest_common.sh@10 -- # set +x 00:05:29.768 ************************************ 00:05:29.768 START TEST spdkcli_tcp 00:05:29.768 ************************************ 00:05:29.768 06:36:56 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:29.768 * Looking for test storage... 00:05:29.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:29.768 06:36:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:29.768 06:36:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:29.768 06:36:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:29.768 06:36:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:29.768 06:36:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:29.768 06:36:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:29.768 06:36:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:29.768 06:36:56 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:29.768 06:36:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.768 06:36:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:29.768 06:36:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=68850 00:05:29.768 06:36:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 68850 00:05:29.768 06:36:56 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 68850 ']' 00:05:29.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.768 06:36:56 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.768 06:36:56 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.768 06:36:56 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.768 06:36:56 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.768 06:36:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.027 [2024-08-14 06:36:57.048985] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:05:30.027 [2024-08-14 06:36:57.049133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68850 ] 00:05:30.027 [2024-08-14 06:36:57.201495] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.027 [2024-08-14 06:36:57.255045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.027 [2024-08-14 06:36:57.255234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.964 06:36:57 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:30.964 06:36:57 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:30.964 06:36:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:30.964 06:36:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=68867 00:05:30.964 06:36:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:30.964 [ 00:05:30.964 "bdev_malloc_delete", 00:05:30.964 "bdev_malloc_create", 00:05:30.964 "bdev_null_resize", 00:05:30.964 "bdev_null_delete", 00:05:30.964 "bdev_null_create", 00:05:30.964 "bdev_nvme_cuse_unregister", 00:05:30.964 "bdev_nvme_cuse_register", 00:05:30.964 "bdev_opal_new_user", 00:05:30.964 "bdev_opal_set_lock_state", 00:05:30.964 "bdev_opal_delete", 00:05:30.964 "bdev_opal_get_info", 00:05:30.964 "bdev_opal_create", 00:05:30.964 "bdev_nvme_opal_revert", 00:05:30.964 "bdev_nvme_opal_init", 00:05:30.964 "bdev_nvme_send_cmd", 00:05:30.964 "bdev_nvme_get_path_iostat", 00:05:30.964 "bdev_nvme_get_mdns_discovery_info", 00:05:30.964 "bdev_nvme_stop_mdns_discovery", 00:05:30.964 "bdev_nvme_start_mdns_discovery", 00:05:30.964 "bdev_nvme_set_multipath_policy", 00:05:30.964 "bdev_nvme_set_preferred_path", 00:05:30.964 "bdev_nvme_get_io_paths", 00:05:30.964 "bdev_nvme_remove_error_injection", 00:05:30.964 "bdev_nvme_add_error_injection", 00:05:30.964 "bdev_nvme_get_discovery_info", 00:05:30.964 "bdev_nvme_stop_discovery", 00:05:30.964 "bdev_nvme_start_discovery", 00:05:30.964 "bdev_nvme_get_controller_health_info", 00:05:30.964 "bdev_nvme_disable_controller", 00:05:30.964 "bdev_nvme_enable_controller", 00:05:30.964 "bdev_nvme_reset_controller", 00:05:30.964 "bdev_nvme_get_transport_statistics", 00:05:30.964 "bdev_nvme_apply_firmware", 00:05:30.964 "bdev_nvme_detach_controller", 00:05:30.964 "bdev_nvme_get_controllers", 00:05:30.964 "bdev_nvme_attach_controller", 00:05:30.964 "bdev_nvme_set_hotplug", 00:05:30.964 "bdev_nvme_set_options", 00:05:30.964 "bdev_passthru_delete", 00:05:30.964 "bdev_passthru_create", 00:05:30.964 "bdev_lvol_set_parent_bdev", 00:05:30.964 "bdev_lvol_set_parent", 00:05:30.964 "bdev_lvol_check_shallow_copy", 00:05:30.964 "bdev_lvol_start_shallow_copy", 00:05:30.964 "bdev_lvol_grow_lvstore", 00:05:30.964 "bdev_lvol_get_lvols", 00:05:30.964 "bdev_lvol_get_lvstores", 00:05:30.964 "bdev_lvol_delete", 00:05:30.964 "bdev_lvol_set_read_only", 00:05:30.964 "bdev_lvol_resize", 00:05:30.964 "bdev_lvol_decouple_parent", 00:05:30.964 "bdev_lvol_inflate", 00:05:30.964 "bdev_lvol_rename", 00:05:30.964 "bdev_lvol_clone_bdev", 00:05:30.964 "bdev_lvol_clone", 00:05:30.964 "bdev_lvol_snapshot", 00:05:30.964 "bdev_lvol_create", 00:05:30.964 "bdev_lvol_delete_lvstore", 00:05:30.964 "bdev_lvol_rename_lvstore", 00:05:30.964 "bdev_lvol_create_lvstore", 
00:05:30.965 "bdev_raid_set_options", 00:05:30.965 "bdev_raid_remove_base_bdev", 00:05:30.965 "bdev_raid_add_base_bdev", 00:05:30.965 "bdev_raid_delete", 00:05:30.965 "bdev_raid_create", 00:05:30.965 "bdev_raid_get_bdevs", 00:05:30.965 "bdev_error_inject_error", 00:05:30.965 "bdev_error_delete", 00:05:30.965 "bdev_error_create", 00:05:30.965 "bdev_split_delete", 00:05:30.965 "bdev_split_create", 00:05:30.965 "bdev_delay_delete", 00:05:30.965 "bdev_delay_create", 00:05:30.965 "bdev_delay_update_latency", 00:05:30.965 "bdev_zone_block_delete", 00:05:30.965 "bdev_zone_block_create", 00:05:30.965 "blobfs_create", 00:05:30.965 "blobfs_detect", 00:05:30.965 "blobfs_set_cache_size", 00:05:30.965 "bdev_aio_delete", 00:05:30.965 "bdev_aio_rescan", 00:05:30.965 "bdev_aio_create", 00:05:30.965 "bdev_ftl_set_property", 00:05:30.965 "bdev_ftl_get_properties", 00:05:30.965 "bdev_ftl_get_stats", 00:05:30.965 "bdev_ftl_unmap", 00:05:30.965 "bdev_ftl_unload", 00:05:30.965 "bdev_ftl_delete", 00:05:30.965 "bdev_ftl_load", 00:05:30.965 "bdev_ftl_create", 00:05:30.965 "bdev_virtio_attach_controller", 00:05:30.965 "bdev_virtio_scsi_get_devices", 00:05:30.965 "bdev_virtio_detach_controller", 00:05:30.965 "bdev_virtio_blk_set_hotplug", 00:05:30.965 "bdev_iscsi_delete", 00:05:30.965 "bdev_iscsi_create", 00:05:30.965 "bdev_iscsi_set_options", 00:05:30.965 "accel_error_inject_error", 00:05:30.965 "ioat_scan_accel_module", 00:05:30.965 "dsa_scan_accel_module", 00:05:30.965 "iaa_scan_accel_module", 00:05:30.965 "keyring_file_remove_key", 00:05:30.965 "keyring_file_add_key", 00:05:30.965 "keyring_linux_set_options", 00:05:30.965 "fsdev_aio_delete", 00:05:30.965 "fsdev_aio_create", 00:05:30.965 "iscsi_get_histogram", 00:05:30.965 "iscsi_enable_histogram", 00:05:30.965 "iscsi_set_options", 00:05:30.965 "iscsi_get_auth_groups", 00:05:30.965 "iscsi_auth_group_remove_secret", 00:05:30.965 "iscsi_auth_group_add_secret", 00:05:30.965 "iscsi_delete_auth_group", 00:05:30.965 "iscsi_create_auth_group", 00:05:30.965 "iscsi_set_discovery_auth", 00:05:30.965 "iscsi_get_options", 00:05:30.965 "iscsi_target_node_request_logout", 00:05:30.965 "iscsi_target_node_set_redirect", 00:05:30.965 "iscsi_target_node_set_auth", 00:05:30.965 "iscsi_target_node_add_lun", 00:05:30.965 "iscsi_get_stats", 00:05:30.965 "iscsi_get_connections", 00:05:30.965 "iscsi_portal_group_set_auth", 00:05:30.965 "iscsi_start_portal_group", 00:05:30.965 "iscsi_delete_portal_group", 00:05:30.965 "iscsi_create_portal_group", 00:05:30.965 "iscsi_get_portal_groups", 00:05:30.965 "iscsi_delete_target_node", 00:05:30.965 "iscsi_target_node_remove_pg_ig_maps", 00:05:30.965 "iscsi_target_node_add_pg_ig_maps", 00:05:30.965 "iscsi_create_target_node", 00:05:30.965 "iscsi_get_target_nodes", 00:05:30.965 "iscsi_delete_initiator_group", 00:05:30.965 "iscsi_initiator_group_remove_initiators", 00:05:30.965 "iscsi_initiator_group_add_initiators", 00:05:30.965 "iscsi_create_initiator_group", 00:05:30.965 "iscsi_get_initiator_groups", 00:05:30.965 "nvmf_set_crdt", 00:05:30.965 "nvmf_set_config", 00:05:30.965 "nvmf_set_max_subsystems", 00:05:30.965 "nvmf_stop_mdns_prr", 00:05:30.965 "nvmf_publish_mdns_prr", 00:05:30.965 "nvmf_subsystem_get_listeners", 00:05:30.965 "nvmf_subsystem_get_qpairs", 00:05:30.965 "nvmf_subsystem_get_controllers", 00:05:30.965 "nvmf_get_stats", 00:05:30.965 "nvmf_get_transports", 00:05:30.965 "nvmf_create_transport", 00:05:30.965 "nvmf_get_targets", 00:05:30.965 "nvmf_delete_target", 00:05:30.965 "nvmf_create_target", 00:05:30.965 
"nvmf_subsystem_allow_any_host", 00:05:30.965 "nvmf_subsystem_remove_host", 00:05:30.965 "nvmf_subsystem_add_host", 00:05:30.965 "nvmf_ns_remove_host", 00:05:30.965 "nvmf_ns_add_host", 00:05:30.965 "nvmf_subsystem_remove_ns", 00:05:30.965 "nvmf_subsystem_add_ns", 00:05:30.965 "nvmf_subsystem_listener_set_ana_state", 00:05:30.965 "nvmf_discovery_get_referrals", 00:05:30.965 "nvmf_discovery_remove_referral", 00:05:30.965 "nvmf_discovery_add_referral", 00:05:30.965 "nvmf_subsystem_remove_listener", 00:05:30.965 "nvmf_subsystem_add_listener", 00:05:30.965 "nvmf_delete_subsystem", 00:05:30.965 "nvmf_create_subsystem", 00:05:30.965 "nvmf_get_subsystems", 00:05:30.965 "env_dpdk_get_mem_stats", 00:05:30.965 "nbd_get_disks", 00:05:30.965 "nbd_stop_disk", 00:05:30.965 "nbd_start_disk", 00:05:30.965 "ublk_recover_disk", 00:05:30.965 "ublk_get_disks", 00:05:30.965 "ublk_stop_disk", 00:05:30.965 "ublk_start_disk", 00:05:30.965 "ublk_destroy_target", 00:05:30.965 "ublk_create_target", 00:05:30.965 "virtio_blk_create_transport", 00:05:30.965 "virtio_blk_get_transports", 00:05:30.965 "vhost_controller_set_coalescing", 00:05:30.965 "vhost_get_controllers", 00:05:30.965 "vhost_delete_controller", 00:05:30.965 "vhost_create_blk_controller", 00:05:30.965 "vhost_scsi_controller_remove_target", 00:05:30.965 "vhost_scsi_controller_add_target", 00:05:30.965 "vhost_start_scsi_controller", 00:05:30.965 "vhost_create_scsi_controller", 00:05:30.965 "thread_set_cpumask", 00:05:30.965 "framework_get_governor", 00:05:30.965 "framework_get_scheduler", 00:05:30.965 "framework_set_scheduler", 00:05:30.965 "framework_get_reactors", 00:05:30.965 "thread_get_io_channels", 00:05:30.965 "thread_get_pollers", 00:05:30.965 "thread_get_stats", 00:05:30.965 "framework_monitor_context_switch", 00:05:30.965 "spdk_kill_instance", 00:05:30.965 "log_enable_timestamps", 00:05:30.965 "log_get_flags", 00:05:30.965 "log_clear_flag", 00:05:30.965 "log_set_flag", 00:05:30.965 "log_get_level", 00:05:30.965 "log_set_level", 00:05:30.965 "log_get_print_level", 00:05:30.965 "log_set_print_level", 00:05:30.965 "framework_enable_cpumask_locks", 00:05:30.965 "framework_disable_cpumask_locks", 00:05:30.965 "framework_wait_init", 00:05:30.965 "framework_start_init", 00:05:30.965 "scsi_get_devices", 00:05:30.965 "bdev_get_histogram", 00:05:30.965 "bdev_enable_histogram", 00:05:30.965 "bdev_set_qos_limit", 00:05:30.965 "bdev_set_qd_sampling_period", 00:05:30.965 "bdev_get_bdevs", 00:05:30.965 "bdev_reset_iostat", 00:05:30.965 "bdev_get_iostat", 00:05:30.965 "bdev_examine", 00:05:30.965 "bdev_wait_for_examine", 00:05:30.965 "bdev_set_options", 00:05:30.965 "accel_get_stats", 00:05:30.965 "accel_set_options", 00:05:30.965 "accel_set_driver", 00:05:30.965 "accel_crypto_key_destroy", 00:05:30.965 "accel_crypto_keys_get", 00:05:30.965 "accel_crypto_key_create", 00:05:30.965 "accel_assign_opc", 00:05:30.965 "accel_get_module_info", 00:05:30.965 "accel_get_opc_assignments", 00:05:30.965 "vmd_rescan", 00:05:30.965 "vmd_remove_device", 00:05:30.965 "vmd_enable", 00:05:30.965 "sock_get_default_impl", 00:05:30.965 "sock_set_default_impl", 00:05:30.965 "sock_impl_set_options", 00:05:30.965 "sock_impl_get_options", 00:05:30.965 "iobuf_get_stats", 00:05:30.965 "iobuf_set_options", 00:05:30.965 "keyring_get_keys", 00:05:30.965 "framework_get_pci_devices", 00:05:30.965 "framework_get_config", 00:05:30.965 "framework_get_subsystems", 00:05:30.965 "fsdev_set_opts", 00:05:30.965 "fsdev_get_opts", 00:05:30.965 "trace_get_info", 00:05:30.965 "trace_get_tpoint_group_mask", 
00:05:30.965 "trace_disable_tpoint_group", 00:05:30.965 "trace_enable_tpoint_group", 00:05:30.965 "trace_clear_tpoint_mask", 00:05:30.965 "trace_set_tpoint_mask", 00:05:30.965 "notify_get_notifications", 00:05:30.965 "notify_get_types", 00:05:30.965 "spdk_get_version", 00:05:30.965 "rpc_get_methods" 00:05:30.965 ] 00:05:30.965 06:36:58 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:30.965 06:36:58 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.965 06:36:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.225 06:36:58 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:31.225 06:36:58 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 68850 00:05:31.225 06:36:58 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 68850 ']' 00:05:31.225 06:36:58 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 68850 00:05:31.225 06:36:58 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:31.225 06:36:58 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:31.225 06:36:58 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68850 00:05:31.225 06:36:58 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:31.225 06:36:58 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:31.226 06:36:58 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68850' 00:05:31.226 killing process with pid 68850 00:05:31.226 06:36:58 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 68850 00:05:31.226 06:36:58 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 68850 00:05:31.485 00:05:31.485 real 0m1.841s 00:05:31.485 user 0m3.276s 00:05:31.485 sys 0m0.524s 00:05:31.485 06:36:58 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.485 06:36:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.485 ************************************ 00:05:31.485 END TEST spdkcli_tcp 00:05:31.485 ************************************ 00:05:31.485 06:36:58 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.485 06:36:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.485 06:36:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.485 06:36:58 -- common/autotest_common.sh@10 -- # set +x 00:05:31.485 ************************************ 00:05:31.485 START TEST dpdk_mem_utility 00:05:31.485 ************************************ 00:05:31.485 06:36:58 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.745 * Looking for test storage... 
00:05:31.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:31.745 06:36:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:31.745 06:36:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68942 00:05:31.745 06:36:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.745 06:36:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68942 00:05:31.745 06:36:58 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 68942 ']' 00:05:31.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.745 06:36:58 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.745 06:36:58 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:31.745 06:36:58 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.745 06:36:58 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:31.745 06:36:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.745 [2024-08-14 06:36:58.926727] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:05:31.745 [2024-08-14 06:36:58.927383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68942 ] 00:05:32.013 [2024-08-14 06:36:59.074837] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.013 [2024-08-14 06:36:59.127270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.593 06:36:59 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:32.593 06:36:59 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:32.593 06:36:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:32.593 06:36:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:32.593 06:36:59 dpdk_mem_utility -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:32.593 06:36:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.593 { 00:05:32.593 "filename": "/tmp/spdk_mem_dump.txt" 00:05:32.593 } 00:05:32.593 06:36:59 dpdk_mem_utility -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:32.593 06:36:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:32.593 DPDK memory size 852.000000 MiB in 1 heap(s) 00:05:32.593 1 heaps totaling size 852.000000 MiB 00:05:32.593 size: 852.000000 MiB heap id: 0 00:05:32.593 end heaps---------- 00:05:32.593 9 mempools totaling size 634.625427 MiB 00:05:32.593 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:32.593 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:32.593 size: 84.521057 MiB name: bdev_io_68942 00:05:32.593 size: 51.011292 MiB name: evtpool_68942 00:05:32.593 size: 50.003479 MiB name: msgpool_68942 00:05:32.593 size: 36.509338 MiB name: fsdev_io_68942 00:05:32.593 size: 21.763794 MiB name: PDU_Pool 
00:05:32.593 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:32.593 size: 0.026123 MiB name: Session_Pool 00:05:32.593 end mempools------- 00:05:32.593 6 memzones totaling size 4.142822 MiB 00:05:32.593 size: 1.000366 MiB name: RG_ring_0_68942 00:05:32.593 size: 1.000366 MiB name: RG_ring_1_68942 00:05:32.593 size: 1.000366 MiB name: RG_ring_4_68942 00:05:32.593 size: 1.000366 MiB name: RG_ring_5_68942 00:05:32.593 size: 0.125366 MiB name: RG_ring_2_68942 00:05:32.593 size: 0.015991 MiB name: RG_ring_3_68942 00:05:32.593 end memzones------- 00:05:32.593 06:36:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:32.855 heap id: 0 total size: 852.000000 MiB number of busy elements: 296 number of free elements: 16 00:05:32.855 list of free elements. size: 13.962952 MiB 00:05:32.855 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:32.855 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:32.855 element at address: 0x20001b400000 with size: 0.999878 MiB 00:05:32.855 element at address: 0x20001b600000 with size: 0.999878 MiB 00:05:32.855 element at address: 0x200034200000 with size: 0.994446 MiB 00:05:32.855 element at address: 0x200015e00000 with size: 0.978699 MiB 00:05:32.855 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:32.855 element at address: 0x20001b800000 with size: 0.936584 MiB 00:05:32.855 element at address: 0x200000200000 with size: 0.835022 MiB 00:05:32.855 element at address: 0x20001d000000 with size: 0.568970 MiB 00:05:32.855 element at address: 0x20000d800000 with size: 0.489624 MiB 00:05:32.855 element at address: 0x200003e00000 with size: 0.488831 MiB 00:05:32.855 element at address: 0x20001ba00000 with size: 0.485657 MiB 00:05:32.855 element at address: 0x200007000000 with size: 0.480469 MiB 00:05:32.855 element at address: 0x20002a400000 with size: 0.395752 MiB 00:05:32.855 element at address: 0x200003a00000 with size: 0.352844 MiB 00:05:32.855 list of standard malloc elements. 
size: 199.264771 MiB 00:05:32.855 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:32.855 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:32.855 element at address: 0x20001b4fff80 with size: 1.000122 MiB 00:05:32.855 element at address: 0x20001b6fff80 with size: 1.000122 MiB 00:05:32.855 element at address: 0x20001b8fff80 with size: 1.000122 MiB 00:05:32.855 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:32.855 element at address: 0x20001b8eff00 with size: 0.062622 MiB 00:05:32.855 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:32.855 element at address: 0x20001b8efdc0 with size: 0.000305 MiB 00:05:32.855 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 
00:05:32.855 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:32.855 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003a5a540 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003a5ea00 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:05:32.855 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:05:32.856 element at 
address: 0x200003e7e680 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000707b000 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000707b180 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000707b240 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000707b300 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000707b480 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000707b540 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:32.856 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200015efa8c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001b8efc40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001b8efd00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001babc740 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d091a80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d091b40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d091c00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d091cc0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d091d80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d091e40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d091f00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d091fc0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092080 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092140 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092200 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d0922c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092380 
with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092440 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092500 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d0925c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092680 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092740 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092800 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d0928c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092980 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092a40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092b00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092bc0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092c80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092d40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092e00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092ec0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d092f80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093040 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093100 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d0931c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093280 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093340 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093400 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d0934c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093580 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093640 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093700 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d0937c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093880 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093940 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093a00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093ac0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093b80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093c40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093d00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093dc0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093e80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d093f40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094000 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d0940c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094180 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094240 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094300 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d0943c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094480 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094540 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094600 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d0946c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094780 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094840 with size: 0.000183 MiB 
00:05:32.856 element at address: 0x20001d094900 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d0949c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094a80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094b40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094c00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094cc0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094d80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094e40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094f00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d094fc0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d095080 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d095140 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d095200 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d0952c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d095380 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001d095440 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a465500 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a4655c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46c1c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46c3c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46c480 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46c540 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46c600 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46c6c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46c780 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46c840 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46c900 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46c9c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46ca80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46cb40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46cc00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46ccc0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46cd80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46ce40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46cf00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46cfc0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46d080 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46d140 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46d200 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46d2c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46d380 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46d440 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46d500 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46d5c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46d680 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46d740 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46d800 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46d8c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46d980 with size: 0.000183 MiB 00:05:32.856 element at 
address: 0x20002a46da40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002a46db00 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46dbc0 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46dc80 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46dd40 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46de00 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46dec0 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46df80 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46e040 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46e100 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46e1c0 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46e280 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46e340 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46e400 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46e4c0 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46e580 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46e640 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46e700 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46e7c0 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46e880 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46e940 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46ea00 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46eac0 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46eb80 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46ec40 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46ed00 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46edc0 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46ee80 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46ef40 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46f000 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46f0c0 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46f180 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46f240 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46f300 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46f3c0 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46f480 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46f540 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46f600 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46f6c0 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46f780 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46f840 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46f900 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46f9c0 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46fa80 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46fb40 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46fc00 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46fcc0 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46fd80 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46fe40 with size: 0.000183 MiB 00:05:32.857 element at address: 0x20002a46ff00 
with size: 0.000183 MiB 00:05:32.857 list of memzone associated elements. size: 638.772278 MiB 00:05:32.857 element at address: 0x20001d095500 with size: 211.416748 MiB 00:05:32.857 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:32.857 element at address: 0x20002a46ffc0 with size: 157.562561 MiB 00:05:32.857 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:32.857 element at address: 0x200015ffab80 with size: 84.020630 MiB 00:05:32.857 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68942_0 00:05:32.857 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:32.857 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68942_0 00:05:32.857 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:32.857 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68942_0 00:05:32.857 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:32.857 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_68942_0 00:05:32.857 element at address: 0x20001bbbe940 with size: 20.255554 MiB 00:05:32.857 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:32.857 element at address: 0x2000343feb40 with size: 18.005066 MiB 00:05:32.857 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:32.857 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:32.857 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68942 00:05:32.857 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:32.857 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68942 00:05:32.857 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:32.857 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68942 00:05:32.857 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:32.857 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:32.857 element at address: 0x20001babc800 with size: 1.008118 MiB 00:05:32.857 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:32.857 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:32.857 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:32.857 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:32.857 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:32.857 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:32.857 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68942 00:05:32.857 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:32.857 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68942 00:05:32.857 element at address: 0x200015efa980 with size: 1.000488 MiB 00:05:32.857 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68942 00:05:32.857 element at address: 0x2000342fe940 with size: 1.000488 MiB 00:05:32.857 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68942 00:05:32.857 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:32.857 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_68942 00:05:32.857 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:32.857 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68942 00:05:32.857 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:32.857 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:32.857 element at address: 0x20000707b780 
with size: 0.500488 MiB 00:05:32.857 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:32.857 element at address: 0x20001ba7c540 with size: 0.250488 MiB 00:05:32.857 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:32.857 element at address: 0x200003a5eac0 with size: 0.125488 MiB 00:05:32.857 associated memzone info: size: 0.125366 MiB name: RG_ring_2_68942 00:05:32.857 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:32.857 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:32.857 element at address: 0x20002a465680 with size: 0.023743 MiB 00:05:32.857 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:32.857 element at address: 0x200003a5a800 with size: 0.016113 MiB 00:05:32.857 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68942 00:05:32.857 element at address: 0x20002a46b7c0 with size: 0.002441 MiB 00:05:32.857 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:32.857 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:32.857 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68942 00:05:32.857 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:32.857 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_68942 00:05:32.857 element at address: 0x200003a5a600 with size: 0.000305 MiB 00:05:32.857 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68942 00:05:32.857 element at address: 0x20002a46c280 with size: 0.000305 MiB 00:05:32.857 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:32.857 06:36:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:32.857 06:36:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68942 00:05:32.857 06:36:59 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 68942 ']' 00:05:32.857 06:36:59 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 68942 00:05:32.857 06:36:59 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:32.857 06:36:59 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:32.857 06:36:59 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68942 00:05:32.857 killing process with pid 68942 00:05:32.857 06:36:59 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:32.857 06:36:59 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:32.857 06:36:59 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68942' 00:05:32.857 06:36:59 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 68942 00:05:32.857 06:36:59 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 68942 00:05:33.118 00:05:33.118 real 0m1.614s 00:05:33.118 user 0m1.623s 00:05:33.118 sys 0m0.464s 00:05:33.118 06:37:00 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.118 06:37:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:33.118 ************************************ 00:05:33.118 END TEST dpdk_mem_utility 00:05:33.118 ************************************ 00:05:33.377 06:37:00 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:33.377 06:37:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.377 06:37:00 -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:05:33.377 06:37:00 -- common/autotest_common.sh@10 -- # set +x 00:05:33.377 ************************************ 00:05:33.377 START TEST event 00:05:33.377 ************************************ 00:05:33.377 06:37:00 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:33.377 * Looking for test storage... 00:05:33.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:33.377 06:37:00 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:33.377 06:37:00 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:33.377 06:37:00 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:33.377 06:37:00 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:33.377 06:37:00 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.377 06:37:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.377 ************************************ 00:05:33.377 START TEST event_perf 00:05:33.377 ************************************ 00:05:33.377 06:37:00 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:33.377 Running I/O for 1 seconds...[2024-08-14 06:37:00.564885] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:05:33.377 [2024-08-14 06:37:00.565095] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69020 ] 00:05:33.637 [2024-08-14 06:37:00.694440] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:33.637 [2024-08-14 06:37:00.747937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.637 [2024-08-14 06:37:00.748115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.637 Running I/O for 1 seconds...[2024-08-14 06:37:00.748277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.637 [2024-08-14 06:37:00.748156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.578 00:05:34.578 lcore 0: 198084 00:05:34.578 lcore 1: 198084 00:05:34.578 lcore 2: 198084 00:05:34.578 lcore 3: 198086 00:05:34.578 done. 
00:05:34.838 00:05:34.838 real 0m1.320s 00:05:34.838 user 0m4.097s 00:05:34.838 sys 0m0.100s 00:05:34.838 06:37:01 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.838 06:37:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.838 ************************************ 00:05:34.838 END TEST event_perf 00:05:34.838 ************************************ 00:05:34.838 06:37:01 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:34.838 06:37:01 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:34.838 06:37:01 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.838 06:37:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.838 ************************************ 00:05:34.838 START TEST event_reactor 00:05:34.838 ************************************ 00:05:34.838 06:37:01 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:34.838 [2024-08-14 06:37:01.952005] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:05:34.838 [2024-08-14 06:37:01.952230] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69054 ] 00:05:34.838 [2024-08-14 06:37:02.082667] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.097 [2024-08-14 06:37:02.135117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.036 test_start 00:05:36.036 oneshot 00:05:36.036 tick 100 00:05:36.036 tick 100 00:05:36.036 tick 250 00:05:36.036 tick 100 00:05:36.036 tick 100 00:05:36.036 tick 100 00:05:36.036 tick 250 00:05:36.036 tick 500 00:05:36.036 tick 100 00:05:36.036 tick 100 00:05:36.036 tick 250 00:05:36.036 tick 100 00:05:36.036 tick 100 00:05:36.036 test_end 00:05:36.036 ************************************ 00:05:36.036 END TEST event_reactor 00:05:36.036 ************************************ 00:05:36.036 00:05:36.036 real 0m1.321s 00:05:36.036 user 0m1.131s 00:05:36.036 sys 0m0.082s 00:05:36.036 06:37:03 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.036 06:37:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:36.036 06:37:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:36.036 06:37:03 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:36.036 06:37:03 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.036 06:37:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.296 ************************************ 00:05:36.296 START TEST event_reactor_perf 00:05:36.296 ************************************ 00:05:36.296 06:37:03 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:36.296 [2024-08-14 06:37:03.327905] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:05:36.296 [2024-08-14 06:37:03.328163] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69095 ] 00:05:36.296 [2024-08-14 06:37:03.474661] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.296 [2024-08-14 06:37:03.527869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.676 test_start 00:05:37.676 test_end 00:05:37.676 Performance: 357532 events per second 00:05:37.676 00:05:37.676 real 0m1.325s 00:05:37.676 user 0m1.139s 00:05:37.676 sys 0m0.078s 00:05:37.676 ************************************ 00:05:37.676 END TEST event_reactor_perf 00:05:37.676 ************************************ 00:05:37.676 06:37:04 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.676 06:37:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.676 06:37:04 event -- event/event.sh@49 -- # uname -s 00:05:37.676 06:37:04 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:37.676 06:37:04 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:37.676 06:37:04 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.676 06:37:04 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.676 06:37:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.676 ************************************ 00:05:37.676 START TEST event_scheduler 00:05:37.676 ************************************ 00:05:37.676 06:37:04 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:37.676 * Looking for test storage... 00:05:37.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:37.676 06:37:04 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:37.676 06:37:04 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=69153 00:05:37.676 06:37:04 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:37.676 06:37:04 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.676 06:37:04 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 69153 00:05:37.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.676 06:37:04 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 69153 ']' 00:05:37.676 06:37:04 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.676 06:37:04 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:37.676 06:37:04 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.676 06:37:04 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:37.676 06:37:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.676 [2024-08-14 06:37:04.891653] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:05:37.676 [2024-08-14 06:37:04.891787] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69153 ] 00:05:37.936 [2024-08-14 06:37:05.039140] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:37.936 [2024-08-14 06:37:05.093401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.936 [2024-08-14 06:37:05.093572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.936 [2024-08-14 06:37:05.093769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.936 [2024-08-14 06:37:05.093789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.875 06:37:05 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:38.875 06:37:05 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:38.875 06:37:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:38.875 06:37:05 event.event_scheduler -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:38.875 06:37:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.875 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:38.875 POWER: Cannot set governor of lcore 0 to userspace 00:05:38.875 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:38.875 POWER: Cannot set governor of lcore 0 to performance 00:05:38.875 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:38.875 POWER: Cannot set governor of lcore 0 to userspace 00:05:38.875 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:38.875 POWER: Unable to set Power Management Environment for lcore 0 00:05:38.875 [2024-08-14 06:37:05.770195] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:38.875 [2024-08-14 06:37:05.770224] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:38.875 [2024-08-14 06:37:05.770246] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:38.875 [2024-08-14 06:37:05.770266] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:38.875 [2024-08-14 06:37:05.770293] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:38.875 [2024-08-14 06:37:05.770301] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:38.875 06:37:05 event.event_scheduler -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:38.875 06:37:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:38.875 06:37:05 event.event_scheduler -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:38.875 06:37:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.875 [2024-08-14 06:37:05.839598] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:38.875 06:37:05 event.event_scheduler -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:38.875 06:37:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:38.875 06:37:05 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.875 06:37:05 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.875 06:37:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.875 ************************************ 00:05:38.875 START TEST scheduler_create_thread 00:05:38.875 ************************************ 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.875 2 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.875 3 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.875 4 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.875 5 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.875 6 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.875 7 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.875 8 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.875 9 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.875 10 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:38.875 06:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.442 06:37:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:39.442 06:37:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:39.442 06:37:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:39.442 06:37:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.820 06:37:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:40.820 06:37:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:40.820 06:37:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:40.820 06:37:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:40.820 06:37:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.758 ************************************ 00:05:41.758 END TEST scheduler_create_thread 00:05:41.758 ************************************ 00:05:41.758 06:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:41.758 00:05:41.758 real 0m3.093s 00:05:41.758 user 0m0.016s 00:05:41.758 sys 0m0.009s 00:05:41.758 06:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.758 06:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.758 06:37:09 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:41.758 06:37:09 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 69153 00:05:41.758 06:37:09 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 69153 ']' 00:05:41.758 06:37:09 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 69153 00:05:41.758 06:37:09 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:05:42.016 06:37:09 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:42.016 06:37:09 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69153 00:05:42.016 killing process with pid 69153 00:05:42.016 06:37:09 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:42.016 06:37:09 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:42.016 06:37:09 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69153' 00:05:42.016 06:37:09 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 69153 00:05:42.016 06:37:09 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 69153 00:05:42.276 [2024-08-14 06:37:09.325760] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:42.536 00:05:42.536 real 0m4.898s 00:05:42.536 user 0m9.303s 00:05:42.536 sys 0m0.441s 00:05:42.536 06:37:09 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.536 06:37:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.536 ************************************ 00:05:42.536 END TEST event_scheduler 00:05:42.537 ************************************ 00:05:42.537 06:37:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:42.537 06:37:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:42.537 06:37:09 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.537 06:37:09 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.537 06:37:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.537 ************************************ 00:05:42.537 START TEST app_repeat 00:05:42.537 ************************************ 00:05:42.537 06:37:09 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:42.537 06:37:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.537 06:37:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.537 06:37:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:42.537 06:37:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.537 06:37:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:42.537 06:37:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:42.537 06:37:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:42.537 06:37:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=69259 00:05:42.537 06:37:09 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:42.537 06:37:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.537 06:37:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 69259' 00:05:42.537 Process app_repeat pid: 69259 00:05:42.537 06:37:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:42.537 06:37:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:42.537 spdk_app_start Round 0 00:05:42.537 06:37:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 69259 /var/tmp/spdk-nbd.sock 00:05:42.537 06:37:09 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 69259 ']' 00:05:42.537 06:37:09 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.537 06:37:09 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:42.537 06:37:09 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:42.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:42.537 06:37:09 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:42.537 06:37:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.537 [2024-08-14 06:37:09.713342] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:05:42.537 [2024-08-14 06:37:09.713548] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69259 ] 00:05:42.796 [2024-08-14 06:37:09.860053] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.796 [2024-08-14 06:37:09.907576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.796 [2024-08-14 06:37:09.907669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.365 06:37:10 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:43.365 06:37:10 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:43.365 06:37:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.624 Malloc0 00:05:43.624 06:37:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.884 Malloc1 00:05:43.884 06:37:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.884 06:37:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.884 06:37:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.884 06:37:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.884 06:37:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.884 06:37:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.884 06:37:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.884 06:37:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.884 06:37:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.884 06:37:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.884 06:37:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.884 06:37:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.884 06:37:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.884 06:37:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.884 06:37:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.884 06:37:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.144 /dev/nbd0 00:05:44.144 06:37:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.144 06:37:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.144 06:37:11 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:44.144 06:37:11 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:44.144 06:37:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:44.144 06:37:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:44.144 06:37:11 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:44.144 06:37:11 event.app_repeat -- 
common/autotest_common.sh@869 -- # break 00:05:44.144 06:37:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:44.144 06:37:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:44.144 06:37:11 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.144 1+0 records in 00:05:44.144 1+0 records out 00:05:44.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045702 s, 9.0 MB/s 00:05:44.144 06:37:11 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.144 06:37:11 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:44.144 06:37:11 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.144 06:37:11 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:44.144 06:37:11 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:44.144 06:37:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.144 06:37:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.144 06:37:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.404 /dev/nbd1 00:05:44.404 06:37:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.404 06:37:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.404 06:37:11 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:44.404 06:37:11 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:44.404 06:37:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:44.404 06:37:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:44.404 06:37:11 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:44.404 06:37:11 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:44.404 06:37:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:44.404 06:37:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:44.404 06:37:11 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.404 1+0 records in 00:05:44.404 1+0 records out 00:05:44.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328083 s, 12.5 MB/s 00:05:44.404 06:37:11 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.404 06:37:11 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:44.404 06:37:11 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.404 06:37:11 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:44.404 06:37:11 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:44.404 06:37:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.404 06:37:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.404 06:37:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.404 06:37:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.404 
06:37:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.663 06:37:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.663 { 00:05:44.663 "nbd_device": "/dev/nbd0", 00:05:44.663 "bdev_name": "Malloc0" 00:05:44.663 }, 00:05:44.663 { 00:05:44.663 "nbd_device": "/dev/nbd1", 00:05:44.663 "bdev_name": "Malloc1" 00:05:44.663 } 00:05:44.663 ]' 00:05:44.663 06:37:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.663 { 00:05:44.663 "nbd_device": "/dev/nbd0", 00:05:44.663 "bdev_name": "Malloc0" 00:05:44.663 }, 00:05:44.663 { 00:05:44.663 "nbd_device": "/dev/nbd1", 00:05:44.663 "bdev_name": "Malloc1" 00:05:44.663 } 00:05:44.663 ]' 00:05:44.663 06:37:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.663 06:37:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.663 /dev/nbd1' 00:05:44.663 06:37:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.663 06:37:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.663 /dev/nbd1' 00:05:44.663 06:37:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.663 06:37:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.663 06:37:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.663 06:37:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.663 06:37:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.663 06:37:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.663 06:37:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.664 256+0 records in 00:05:44.664 256+0 records out 00:05:44.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136845 s, 76.6 MB/s 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.664 256+0 records in 00:05:44.664 256+0 records out 00:05:44.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170022 s, 61.7 MB/s 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.664 256+0 records in 00:05:44.664 256+0 records out 00:05:44.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022972 s, 45.6 MB/s 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.664 06:37:11 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.664 06:37:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.923 06:37:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.923 06:37:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.923 06:37:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.923 06:37:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.923 06:37:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.923 06:37:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.923 06:37:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.923 06:37:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.923 06:37:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.923 06:37:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.183 06:37:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.183 06:37:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.183 06:37:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.183 06:37:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.183 06:37:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.183 06:37:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.183 06:37:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.183 06:37:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.183 06:37:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.183 06:37:12 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.183 06:37:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.443 06:37:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.443 06:37:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.443 06:37:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.443 06:37:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.443 06:37:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.443 06:37:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.443 06:37:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.443 06:37:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.443 06:37:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.443 06:37:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.443 06:37:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.443 06:37:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.443 06:37:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.703 06:37:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.962 [2024-08-14 06:37:13.031517] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.962 [2024-08-14 06:37:13.082531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.962 [2024-08-14 06:37:13.082537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.962 [2024-08-14 06:37:13.124278] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.962 [2024-08-14 06:37:13.124341] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.260 spdk_app_start Round 1 00:05:49.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:49.260 06:37:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:49.260 06:37:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:49.260 06:37:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 69259 /var/tmp/spdk-nbd.sock 00:05:49.260 06:37:15 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 69259 ']' 00:05:49.260 06:37:15 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.260 06:37:15 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:49.260 06:37:15 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
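Each app_repeat round drives the same NBD data path; condensed from the Round 0 trace above, the verify pass is roughly as follows (socket path, sizes and file names as in this run):

    sock=/var/tmp/spdk-nbd.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # back the two NBD devices with 64 MB malloc bdevs (4096-byte blocks)
    "$rpc" -s "$sock" bdev_malloc_create 64 4096      # -> Malloc0
    "$rpc" -s "$sock" bdev_malloc_create 64 4096      # -> Malloc1
    "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
    # write 1 MiB of random data through each device, then read it back and compare
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$nbd"
    done
    rm nbdrandtest
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
    # the round is clean once nbd_get_disks reports an empty list
    "$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'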
00:05:49.260 06:37:15 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:49.260 06:37:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.260 06:37:16 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:49.260 06:37:16 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:49.260 06:37:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.260 Malloc0 00:05:49.260 06:37:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.260 Malloc1 00:05:49.260 06:37:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.260 06:37:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.260 06:37:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.260 06:37:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.260 06:37:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.260 06:37:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.260 06:37:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.260 06:37:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.260 06:37:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.260 06:37:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.260 06:37:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.519 06:37:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.519 06:37:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.519 06:37:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.519 06:37:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.519 06:37:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.519 /dev/nbd0 00:05:49.519 06:37:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.519 06:37:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.519 06:37:16 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:49.519 06:37:16 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:49.519 06:37:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:49.519 06:37:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:49.519 06:37:16 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:49.519 06:37:16 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:49.519 06:37:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:49.519 06:37:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:49.519 06:37:16 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.519 1+0 records in 00:05:49.519 1+0 records out 
00:05:49.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419617 s, 9.8 MB/s 00:05:49.519 06:37:16 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.519 06:37:16 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:49.519 06:37:16 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.778 06:37:16 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:49.778 06:37:16 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:49.778 06:37:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.778 06:37:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.778 06:37:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.778 /dev/nbd1 00:05:49.778 06:37:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.778 06:37:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.778 06:37:16 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:49.779 06:37:16 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:49.779 06:37:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:49.779 06:37:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:49.779 06:37:16 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:49.779 06:37:17 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:49.779 06:37:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:49.779 06:37:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:49.779 06:37:17 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.779 1+0 records in 00:05:49.779 1+0 records out 00:05:49.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335354 s, 12.2 MB/s 00:05:49.779 06:37:17 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.779 06:37:17 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:49.779 06:37:17 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.779 06:37:17 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:49.779 06:37:17 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:49.779 06:37:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.779 06:37:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.779 06:37:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.779 06:37:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.779 06:37:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.038 { 00:05:50.038 "nbd_device": "/dev/nbd0", 00:05:50.038 "bdev_name": "Malloc0" 00:05:50.038 }, 00:05:50.038 { 00:05:50.038 "nbd_device": "/dev/nbd1", 00:05:50.038 "bdev_name": "Malloc1" 00:05:50.038 } 
00:05:50.038 ]' 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.038 { 00:05:50.038 "nbd_device": "/dev/nbd0", 00:05:50.038 "bdev_name": "Malloc0" 00:05:50.038 }, 00:05:50.038 { 00:05:50.038 "nbd_device": "/dev/nbd1", 00:05:50.038 "bdev_name": "Malloc1" 00:05:50.038 } 00:05:50.038 ]' 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.038 /dev/nbd1' 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.038 /dev/nbd1' 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.038 06:37:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.298 256+0 records in 00:05:50.298 256+0 records out 00:05:50.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142605 s, 73.5 MB/s 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.298 256+0 records in 00:05:50.298 256+0 records out 00:05:50.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246804 s, 42.5 MB/s 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.298 256+0 records in 00:05:50.298 256+0 records out 00:05:50.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021681 s, 48.4 MB/s 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.298 06:37:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.559 06:37:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.819 06:37:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.819 06:37:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.819 06:37:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:51.079 06:37:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.079 06:37:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.079 06:37:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.079 06:37:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:51.079 06:37:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.079 06:37:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.079 06:37:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.079 06:37:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.079 06:37:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.079 06:37:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.338 06:37:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.338 [2024-08-14 06:37:18.506577] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.338 [2024-08-14 06:37:18.551759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.338 [2024-08-14 06:37:18.551784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.598 [2024-08-14 06:37:18.594494] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.598 [2024-08-14 06:37:18.594553] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.138 spdk_app_start Round 2 00:05:54.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.138 06:37:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.138 06:37:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:54.138 06:37:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 69259 /var/tmp/spdk-nbd.sock 00:05:54.138 06:37:21 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 69259 ']' 00:05:54.138 06:37:21 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.138 06:37:21 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.138 06:37:21 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
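The waitfornbd helper whose trace repeats before each dd has roughly this shape, reconstructed from the traced line numbers (the retry delay and the failure path are assumptions, not taken from this log):

    waitfornbd() {
        local nbd_name=$1
        local i
        # wait for the kernel to expose the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off
        done
        # require one successful 4 KiB direct read before declaring the device usable
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s nbdtest)
            rm -f nbdtest
            if [ "$size" != 0 ]; then
                return 0
            fi
            sleep 0.1    # assumed back-off
        done
        return 1         # assumed failure path
    }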
00:05:54.138 06:37:21 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.138 06:37:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.398 06:37:21 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:54.398 06:37:21 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:54.398 06:37:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.658 Malloc0 00:05:54.658 06:37:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.919 Malloc1 00:05:54.919 06:37:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.919 06:37:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.919 06:37:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.919 06:37:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.919 06:37:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.919 06:37:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.919 06:37:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.919 06:37:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.919 06:37:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.919 06:37:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.919 06:37:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.919 06:37:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.919 06:37:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.919 06:37:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.919 06:37:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.919 06:37:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.179 /dev/nbd0 00:05:55.179 06:37:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.179 06:37:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.179 06:37:22 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:55.179 06:37:22 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:55.179 06:37:22 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:55.179 06:37:22 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:55.179 06:37:22 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:55.179 06:37:22 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:55.179 06:37:22 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:55.179 06:37:22 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:55.179 06:37:22 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.179 1+0 records in 00:05:55.179 1+0 records out 
00:05:55.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00372025 s, 1.1 MB/s 00:05:55.179 06:37:22 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.179 06:37:22 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:55.179 06:37:22 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.179 06:37:22 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:55.179 06:37:22 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:55.179 06:37:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.179 06:37:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.179 06:37:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.438 /dev/nbd1 00:05:55.438 06:37:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.438 06:37:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.438 06:37:22 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:55.438 06:37:22 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:55.438 06:37:22 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:55.438 06:37:22 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:55.438 06:37:22 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:55.438 06:37:22 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:55.438 06:37:22 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:55.438 06:37:22 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:55.438 06:37:22 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.438 1+0 records in 00:05:55.438 1+0 records out 00:05:55.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248533 s, 16.5 MB/s 00:05:55.438 06:37:22 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.438 06:37:22 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:55.438 06:37:22 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.438 06:37:22 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:55.438 06:37:22 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:55.438 06:37:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.438 06:37:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.439 06:37:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.439 06:37:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.439 06:37:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.698 06:37:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.698 { 00:05:55.698 "nbd_device": "/dev/nbd0", 00:05:55.698 "bdev_name": "Malloc0" 00:05:55.698 }, 00:05:55.698 { 00:05:55.698 "nbd_device": "/dev/nbd1", 00:05:55.698 "bdev_name": "Malloc1" 00:05:55.698 } 
00:05:55.698 ]' 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.699 { 00:05:55.699 "nbd_device": "/dev/nbd0", 00:05:55.699 "bdev_name": "Malloc0" 00:05:55.699 }, 00:05:55.699 { 00:05:55.699 "nbd_device": "/dev/nbd1", 00:05:55.699 "bdev_name": "Malloc1" 00:05:55.699 } 00:05:55.699 ]' 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.699 /dev/nbd1' 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.699 /dev/nbd1' 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.699 256+0 records in 00:05:55.699 256+0 records out 00:05:55.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00482679 s, 217 MB/s 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.699 256+0 records in 00:05:55.699 256+0 records out 00:05:55.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254884 s, 41.1 MB/s 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.699 256+0 records in 00:05:55.699 256+0 records out 00:05:55.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227993 s, 46.0 MB/s 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.699 06:37:22 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.699 06:37:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.959 06:37:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.959 06:37:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.959 06:37:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.959 06:37:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.959 06:37:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.959 06:37:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.959 06:37:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.959 06:37:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.959 06:37:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.959 06:37:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.219 06:37:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.219 06:37:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.219 06:37:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.219 06:37:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.219 06:37:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.219 06:37:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.219 06:37:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.219 06:37:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.219 06:37:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.219 06:37:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.219 06:37:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.479 06:37:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.479 06:37:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.480 06:37:23 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:56.480 06:37:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.480 06:37:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.480 06:37:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.480 06:37:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.480 06:37:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.480 06:37:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.480 06:37:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.480 06:37:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.480 06:37:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.480 06:37:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.739 06:37:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.999 [2024-08-14 06:37:23.992452] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.999 [2024-08-14 06:37:24.045061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.999 [2024-08-14 06:37:24.045068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.999 [2024-08-14 06:37:24.087597] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.999 [2024-08-14 06:37:24.087676] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.294 06:37:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 69259 /var/tmp/spdk-nbd.sock 00:06:00.294 06:37:26 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 69259 ']' 00:06:00.294 06:37:26 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.294 06:37:26 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:00.294 06:37:26 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
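The repeat loop driving these rounds is traced piecemeal through the section; condensed, it is roughly the following (rpc.py path abbreviated and pid handling simplified; flags as in this run):

    # start the app under test: 2 cores (0x3), 4-second rounds, NBD rpc socket
    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
        -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # ... malloc bdev setup and the NBD write/verify pass for this round ...
        rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3
    done
    # the app restarts once more into Round 3 before the final teardown
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    killprocess "$repeat_pid"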
00:06:00.294 06:37:26 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:00.294 06:37:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.294 06:37:27 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:00.294 06:37:27 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:00.294 06:37:27 event.app_repeat -- event/event.sh@39 -- # killprocess 69259 00:06:00.294 06:37:27 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 69259 ']' 00:06:00.294 06:37:27 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 69259 00:06:00.294 06:37:27 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:00.294 06:37:27 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:00.294 06:37:27 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69259 00:06:00.294 killing process with pid 69259 00:06:00.294 06:37:27 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:00.294 06:37:27 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:00.294 06:37:27 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69259' 00:06:00.294 06:37:27 event.app_repeat -- common/autotest_common.sh@965 -- # kill 69259 00:06:00.294 06:37:27 event.app_repeat -- common/autotest_common.sh@970 -- # wait 69259 00:06:00.294 spdk_app_start is called in Round 0. 00:06:00.294 Shutdown signal received, stop current app iteration 00:06:00.294 Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 reinitialization... 00:06:00.294 spdk_app_start is called in Round 1. 00:06:00.294 Shutdown signal received, stop current app iteration 00:06:00.294 Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 reinitialization... 00:06:00.294 spdk_app_start is called in Round 2. 00:06:00.294 Shutdown signal received, stop current app iteration 00:06:00.294 Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 reinitialization... 00:06:00.294 spdk_app_start is called in Round 3. 00:06:00.294 Shutdown signal received, stop current app iteration 00:06:00.294 06:37:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:00.294 06:37:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:00.294 00:06:00.294 real 0m17.620s 00:06:00.294 user 0m38.818s 00:06:00.294 sys 0m2.776s 00:06:00.294 06:37:27 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.294 06:37:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.294 ************************************ 00:06:00.294 END TEST app_repeat 00:06:00.294 ************************************ 00:06:00.294 06:37:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:00.294 06:37:27 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:00.294 06:37:27 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.294 06:37:27 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.294 06:37:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.294 ************************************ 00:06:00.294 START TEST cpu_locks 00:06:00.294 ************************************ 00:06:00.294 06:37:27 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:00.294 * Looking for test storage... 
00:06:00.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:00.294 06:37:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:00.294 06:37:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:00.294 06:37:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:00.294 06:37:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:00.294 06:37:27 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.294 06:37:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.294 06:37:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.294 ************************************ 00:06:00.294 START TEST default_locks 00:06:00.294 ************************************ 00:06:00.294 06:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:00.294 06:37:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69671 00:06:00.294 06:37:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 69671 00:06:00.294 06:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 69671 ']' 00:06:00.294 06:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.294 06:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:00.294 06:37:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.294 06:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.294 06:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:00.294 06:37:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.294 [2024-08-14 06:37:27.542310] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
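cpu_locks.sh talks to up to two spdk_tgt instances, one per RPC socket (rpc_sock1 and rpc_sock2 above). The rpc_cmd calls traced later are assumed to reduce to rpc.py invocations against whichever socket the current test selected, roughly:

rpc_sock1=/var/tmp/spdk.sock      # primary target
rpc_sock2=/var/tmp/spdk2.sock     # secondary target, where a test starts one
rpc_cmd() {                       # assumed shape; the real wrapper keeps a persistent connection
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "${rpc_addr:-$rpc_sock1}" "$@"
}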
00:06:00.294 [2024-08-14 06:37:27.542860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69671 ] 00:06:00.554 [2024-08-14 06:37:27.689603] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.554 [2024-08-14 06:37:27.738991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.493 06:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:01.493 06:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:01.493 06:37:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 69671 00:06:01.493 06:37:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.493 06:37:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 69671 00:06:01.493 06:37:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 69671 00:06:01.493 06:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 69671 ']' 00:06:01.493 06:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 69671 00:06:01.493 06:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:01.493 06:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:01.493 06:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69671 00:06:01.493 06:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:01.493 06:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:01.493 killing process with pid 69671 00:06:01.493 06:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69671' 00:06:01.493 06:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 69671 00:06:01.493 06:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 69671 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69671 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@646 -- # local es=0 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # valid_exec_arg waitforlisten 69671 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # local arg=waitforlisten 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # type -t waitforlisten 00:06:02.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
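The locks_exist check traced above (cpu_locks.sh@22) relies on the target holding a POSIX file lock named spdk_cpu_lock_* for every core it has claimed; lslocks makes that visible per pid. A minimal sketch of the check, using the pid from this run:

locks_exist() {                               # does pid $1 hold any SPDK core lock?
    lslocks -p "$1" | grep -q spdk_cpu_lock   # lock files are /var/tmp/spdk_cpu_lock_### entries
}
locks_exist 69671 && echo "pid 69671 holds its core-0 lock"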
00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # waitforlisten 69671 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 69671 ']' 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.063 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (69671) - No such process 00:06:02.063 ERROR: process (pid: 69671) is no longer running 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # es=1 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:02.063 00:06:02.063 real 0m1.656s 00:06:02.063 user 0m1.640s 00:06:02.063 sys 0m0.550s 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.063 06:37:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.063 ************************************ 00:06:02.063 END TEST default_locks 00:06:02.063 ************************************ 00:06:02.063 06:37:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:02.063 06:37:29 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:02.063 06:37:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.063 06:37:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.063 ************************************ 00:06:02.063 START TEST default_locks_via_rpc 00:06:02.063 ************************************ 00:06:02.063 06:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:02.063 06:37:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69724 00:06:02.063 06:37:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.063 06:37:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # 
waitforlisten 69724 00:06:02.063 06:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 69724 ']' 00:06:02.063 06:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.063 06:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:02.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.063 06:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.063 06:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:02.063 06:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.063 [2024-08-14 06:37:29.263430] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:06:02.063 [2024-08-14 06:37:29.263572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69724 ] 00:06:02.322 [2024-08-14 06:37:29.409889] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.322 [2024-08-14 06:37:29.460565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 69724 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 69724 00:06:02.887 06:37:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.455 06:37:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 69724 00:06:03.455 06:37:30 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 69724 ']' 00:06:03.455 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 69724 00:06:03.455 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:03.455 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:03.455 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69724 00:06:03.455 killing process with pid 69724 00:06:03.455 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:03.455 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:03.455 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69724' 00:06:03.455 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 69724 00:06:03.455 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 69724 00:06:03.714 00:06:03.714 real 0m1.740s 00:06:03.714 user 0m1.716s 00:06:03.714 sys 0m0.593s 00:06:03.714 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.714 06:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.714 ************************************ 00:06:03.714 END TEST default_locks_via_rpc 00:06:03.714 ************************************ 00:06:03.714 06:37:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:03.714 06:37:30 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.714 06:37:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.714 06:37:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.973 ************************************ 00:06:03.973 START TEST non_locking_app_on_locked_coremask 00:06:03.973 ************************************ 00:06:03.973 06:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:03.973 06:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69776 00:06:03.973 06:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.973 06:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 69776 /var/tmp/spdk.sock 00:06:03.973 06:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69776 ']' 00:06:03.973 06:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.973 06:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:03.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.973 06:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
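default_locks_via_rpc, which just finished above, toggles the same locks at runtime instead of at startup: framework_disable_cpumask_locks releases them, framework_enable_cpumask_locks re-acquires them, and lslocks confirms the result. A sketch of that round trip against the default socket (target pid 69724 in this run):

scripts/rpc.py framework_disable_cpumask_locks   # drop the per-core lock files
scripts/rpc.py framework_enable_cpumask_locks    # claim them again
lslocks -p 69724 | grep spdk_cpu_lock            # expect one spdk_cpu_lock_* entry per claimed core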
00:06:03.973 06:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:03.973 06:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.973 [2024-08-14 06:37:31.065294] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:06:03.973 [2024-08-14 06:37:31.065414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69776 ] 00:06:03.973 [2024-08-14 06:37:31.212388] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.232 [2024-08-14 06:37:31.257469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.800 06:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:04.800 06:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:04.800 06:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:04.800 06:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69786 00:06:04.800 06:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 69786 /var/tmp/spdk2.sock 00:06:04.800 06:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69786 ']' 00:06:04.800 06:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.800 06:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.800 06:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.800 06:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.800 06:37:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.800 [2024-08-14 06:37:31.956723] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:06:04.800 [2024-08-14 06:37:31.956843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69786 ] 00:06:05.060 [2024-08-14 06:37:32.094124] app.c: 907:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
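non_locking_app_on_locked_coremask starts a second target on the already-claimed core but with --disable-cpumask-locks, so it skips the claim and boots anyway; that is the "CPU core locks deactivated" notice right above. The scenario, with the masks, sockets, and pids from this run:

build/bin/spdk_tgt -m 0x1 &                                                 # pid 69776 claims core 0
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # pid 69786: same core, no claim, allowed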
00:06:05.060 [2024-08-14 06:37:32.094187] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.060 [2024-08-14 06:37:32.190426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.629 06:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.629 06:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:05.629 06:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 69776 00:06:05.629 06:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 69776 00:06:05.629 06:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.197 06:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 69776 00:06:06.197 06:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 69776 ']' 00:06:06.197 06:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 69776 00:06:06.197 06:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:06.197 06:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:06.197 06:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69776 00:06:06.197 06:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:06.197 06:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:06.197 killing process with pid 69776 00:06:06.197 06:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69776' 00:06:06.197 06:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 69776 00:06:06.197 06:37:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 69776 00:06:07.134 06:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 69786 00:06:07.134 06:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 69786 ']' 00:06:07.134 06:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 69786 00:06:07.134 06:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:07.134 06:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:07.134 06:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69786 00:06:07.134 06:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:07.134 06:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:07.134 killing process with pid 69786 00:06:07.134 06:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69786' 00:06:07.134 06:37:34 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 69786 00:06:07.134 06:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 69786 00:06:07.394 00:06:07.394 real 0m3.497s 00:06:07.394 user 0m3.656s 00:06:07.394 sys 0m1.032s 00:06:07.394 06:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.394 06:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.394 ************************************ 00:06:07.394 END TEST non_locking_app_on_locked_coremask 00:06:07.394 ************************************ 00:06:07.394 06:37:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:07.394 06:37:34 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.394 06:37:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.394 06:37:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.394 ************************************ 00:06:07.394 START TEST locking_app_on_unlocked_coremask 00:06:07.394 ************************************ 00:06:07.394 06:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:07.394 06:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=69850 00:06:07.394 06:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 69850 /var/tmp/spdk.sock 00:06:07.394 06:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:07.394 06:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69850 ']' 00:06:07.394 06:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.394 06:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.394 06:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.394 06:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.394 06:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.394 [2024-08-14 06:37:34.621641] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:06:07.394 [2024-08-14 06:37:34.621781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69850 ] 00:06:07.653 [2024-08-14 06:37:34.764983] app.c: 907:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
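locking_app_on_unlocked_coremask, starting above, is the mirror image: the first target is launched with --disable-cpumask-locks (hence the notice just logged), so the core-0 lock stays free and a later, normally started target on the same mask takes it itself. A sketch with this run's parameters:

build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &     # pid 69850 leaves core 0 unclaimed
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # pid 69866 claims core 0 for itself
lslocks -p 69866 | grep spdk_cpu_lock                   # the lock belongs to the second instance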
00:06:07.653 [2024-08-14 06:37:34.765047] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.653 [2024-08-14 06:37:34.812523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.225 06:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.225 06:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:08.225 06:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:08.225 06:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69866 00:06:08.225 06:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 69866 /var/tmp/spdk2.sock 00:06:08.225 06:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69866 ']' 00:06:08.225 06:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.225 06:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:08.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.225 06:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.225 06:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:08.225 06:37:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.485 [2024-08-14 06:37:35.502783] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:06:08.485 [2024-08-14 06:37:35.502904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69866 ] 00:06:08.485 [2024-08-14 06:37:35.638853] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.485 [2024-08-14 06:37:35.725826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.424 06:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:09.424 06:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:09.424 06:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 69866 00:06:09.424 06:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.424 06:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 69866 00:06:09.683 06:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 69850 00:06:09.683 06:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 69850 ']' 00:06:09.683 06:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 69850 00:06:09.683 06:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:09.683 06:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:09.683 06:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69850 00:06:09.683 06:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:09.683 killing process with pid 69850 00:06:09.683 06:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:09.683 06:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69850' 00:06:09.683 06:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 69850 00:06:09.683 06:37:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 69850 00:06:10.623 06:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 69866 00:06:10.623 06:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 69866 ']' 00:06:10.623 06:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 69866 00:06:10.623 06:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:10.623 06:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:10.623 06:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69866 00:06:10.623 killing process with pid 69866 00:06:10.623 06:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:10.623 06:37:37 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:10.623 06:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69866' 00:06:10.623 06:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 69866 00:06:10.623 06:37:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 69866 00:06:10.882 00:06:10.883 real 0m3.485s 00:06:10.883 user 0m3.642s 00:06:10.883 sys 0m1.012s 00:06:10.883 06:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.883 ************************************ 00:06:10.883 END TEST locking_app_on_unlocked_coremask 00:06:10.883 ************************************ 00:06:10.883 06:37:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.883 06:37:38 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:10.883 06:37:38 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.883 06:37:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.883 06:37:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.883 ************************************ 00:06:10.883 START TEST locking_app_on_locked_coremask 00:06:10.883 ************************************ 00:06:10.883 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:10.883 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.883 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69928 00:06:10.883 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 69928 /var/tmp/spdk.sock 00:06:10.883 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69928 ']' 00:06:10.883 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.883 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.883 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.883 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.883 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.142 [2024-08-14 06:37:38.165210] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:06:11.142 [2024-08-14 06:37:38.165370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69928 ] 00:06:11.142 [2024-08-14 06:37:38.301746] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.142 [2024-08-14 06:37:38.345857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.080 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69940 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69940 /var/tmp/spdk2.sock 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@646 -- # local es=0 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # valid_exec_arg waitforlisten 69940 /var/tmp/spdk2.sock 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # local arg=waitforlisten 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # type -t waitforlisten 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # waitforlisten 69940 /var/tmp/spdk2.sock 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69940 ']' 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.081 06:37:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.081 [2024-08-14 06:37:39.078030] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
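locking_app_on_locked_coremask then checks the failure path: pid 69928 holds core 0, and the second target started above (pid 69940, without --disable-cpumask-locks) is expected to abort, which is exactly the claim_cpu_cores error and "Unable to acquire lock" exit logged just below. An equivalent sketch:

build/bin/spdk_tgt -m 0x1 &                          # pid 69928 claims core 0
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock     # expected to exit: core 0 already claimed
echo "second target exited with status $?"           # NOT waitforlisten asserts this failure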
00:06:12.081 [2024-08-14 06:37:39.078147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69940 ] 00:06:12.081 [2024-08-14 06:37:39.213618] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69928 has claimed it. 00:06:12.081 [2024-08-14 06:37:39.213677] app.c: 903:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.651 ERROR: process (pid: 69940) is no longer running 00:06:12.651 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (69940) - No such process 00:06:12.651 06:37:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:12.651 06:37:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:12.651 06:37:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # es=1 00:06:12.651 06:37:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:12.651 06:37:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:12.651 06:37:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:12.651 06:37:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 69928 00:06:12.651 06:37:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 69928 00:06:12.651 06:37:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.933 06:37:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 69928 00:06:12.933 06:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 69928 ']' 00:06:12.933 06:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 69928 00:06:12.933 06:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:12.933 06:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:12.933 06:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69928 00:06:12.933 06:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:12.933 06:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:12.933 killing process with pid 69928 00:06:12.933 06:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69928' 00:06:12.933 06:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 69928 00:06:12.933 06:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 69928 00:06:13.502 00:06:13.502 real 0m2.401s 00:06:13.502 user 0m2.604s 00:06:13.502 sys 0m0.677s 00:06:13.502 06:37:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.502 06:37:40 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:13.502 ************************************ 00:06:13.502 END TEST locking_app_on_locked_coremask 00:06:13.502 ************************************ 00:06:13.502 06:37:40 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:13.502 06:37:40 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:13.502 06:37:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.502 06:37:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.502 ************************************ 00:06:13.502 START TEST locking_overlapped_coremask 00:06:13.502 ************************************ 00:06:13.503 06:37:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:13.503 06:37:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69993 00:06:13.503 06:37:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:13.503 06:37:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 69993 /var/tmp/spdk.sock 00:06:13.503 06:37:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 69993 ']' 00:06:13.503 06:37:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.503 06:37:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.503 06:37:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.503 06:37:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.503 06:37:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.503 [2024-08-14 06:37:40.626087] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:06:13.503 [2024-08-14 06:37:40.626219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69993 ] 00:06:13.762 [2024-08-14 06:37:40.768902] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.763 [2024-08-14 06:37:40.819993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.763 [2024-08-14 06:37:40.820029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.763 [2024-08-14 06:37:40.820148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70011 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70011 /var/tmp/spdk2.sock 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@646 -- # local es=0 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # valid_exec_arg waitforlisten 70011 /var/tmp/spdk2.sock 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # local arg=waitforlisten 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # type -t waitforlisten 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # waitforlisten 70011 /var/tmp/spdk2.sock 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 70011 ']' 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:14.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:14.339 06:37:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.339 [2024-08-14 06:37:41.559149] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
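locking_overlapped_coremask repeats the failure with partially overlapping masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is contested and the second target started above is expected to abort, as the claim error logged just after shows. Sketch:

build/bin/spdk_tgt -m 0x7 &                          # pid 69993 claims cores 0, 1, 2
build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock    # cores 2-4 requested; core 2 collides, so it exits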
00:06:14.339 [2024-08-14 06:37:41.559659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70011 ] 00:06:14.612 [2024-08-14 06:37:41.701322] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69993 has claimed it. 00:06:14.612 [2024-08-14 06:37:41.701391] app.c: 903:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:15.183 ERROR: process (pid: 70011) is no longer running 00:06:15.183 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (70011) - No such process 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # es=1 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 69993 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 69993 ']' 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 69993 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69993 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69993' 00:06:15.183 killing process with pid 69993 00:06:15.183 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 69993 00:06:15.183 06:37:42 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 69993 00:06:15.443 00:06:15.443 real 0m2.137s 00:06:15.443 user 0m5.783s 00:06:15.443 sys 0m0.500s 00:06:15.443 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.443 06:37:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.443 ************************************ 00:06:15.443 END TEST locking_overlapped_coremask 00:06:15.443 ************************************ 00:06:15.703 06:37:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:15.703 06:37:42 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:15.703 06:37:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.703 06:37:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.703 ************************************ 00:06:15.703 START TEST locking_overlapped_coremask_via_rpc 00:06:15.703 ************************************ 00:06:15.703 06:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:15.703 06:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70053 00:06:15.703 06:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:15.703 06:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 70053 /var/tmp/spdk.sock 00:06:15.703 06:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 70053 ']' 00:06:15.703 06:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.703 06:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.703 06:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.703 06:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.703 06:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.703 [2024-08-14 06:37:42.829138] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:06:15.703 [2024-08-14 06:37:42.829286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70053 ] 00:06:15.963 [2024-08-14 06:37:42.975452] app.c: 907:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:15.963 [2024-08-14 06:37:42.975541] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.963 [2024-08-14 06:37:43.029677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.963 [2024-08-14 06:37:43.029769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.963 [2024-08-14 06:37:43.029873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.533 06:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:16.533 06:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:16.533 06:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70071 00:06:16.533 06:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 70071 /var/tmp/spdk2.sock 00:06:16.533 06:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:16.533 06:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 70071 ']' 00:06:16.533 06:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.533 06:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:16.533 06:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.533 06:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:16.533 06:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.533 [2024-08-14 06:37:43.778344] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:06:16.533 [2024-08-14 06:37:43.778491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70071 ] 00:06:16.792 [2024-08-14 06:37:43.921042] app.c: 907:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
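locking_overlapped_coremask_via_rpc sets up the same overlap, but with both targets started lock-free (the two "CPU core locks deactivated" notices above); only then does the test claim the primary's cores over RPC, so the secondary's later attempt must fail. A sketch with this run's masks and sockets:

build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                          # pid 70053, cores 0-2, unclaimed
build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 70071, cores 2-4, unclaimed
scripts/rpc.py framework_enable_cpumask_locks                                 # primary now claims cores 0-2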
00:06:16.792 [2024-08-14 06:37:43.921125] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.792 [2024-08-14 06:37:44.022997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.792 [2024-08-14 06:37:44.026391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.792 [2024-08-14 06:37:44.026508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@646 -- # local es=0 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # local arg=rpc_cmd 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # type -t rpc_cmd 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.731 [2024-08-14 06:37:44.643387] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70053 has claimed it. 
00:06:17.731 request: 00:06:17.731 { 00:06:17.731 "method": "framework_enable_cpumask_locks", 00:06:17.731 "req_id": 1 00:06:17.731 } 00:06:17.731 Got JSON-RPC error response 00:06:17.731 response: 00:06:17.731 { 00:06:17.731 "code": -32603, 00:06:17.731 "message": "Failed to claim CPU core: 2" 00:06:17.731 } 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # es=1 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 70053 /var/tmp/spdk.sock 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 70053 ']' 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:17.731 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.732 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:17.732 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.732 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:17.732 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:17.732 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 70071 /var/tmp/spdk2.sock 00:06:17.732 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 70071 ']' 00:06:17.732 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.732 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:17.732 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:17.732 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:17.732 06:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.991 06:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:17.991 06:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:17.991 06:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:17.991 06:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:17.991 06:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:17.991 06:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:17.991 00:06:17.991 real 0m2.365s 00:06:17.991 user 0m1.116s 00:06:17.991 sys 0m0.176s 00:06:17.991 06:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.991 06:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.991 ************************************ 00:06:17.991 END TEST locking_overlapped_coremask_via_rpc 00:06:17.991 ************************************ 00:06:17.991 06:37:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:17.991 06:37:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70053 ]] 00:06:17.991 06:37:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 70053 00:06:17.991 06:37:45 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 70053 ']' 00:06:17.991 06:37:45 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 70053 00:06:17.991 06:37:45 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:17.991 06:37:45 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:17.991 06:37:45 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70053 00:06:17.991 killing process with pid 70053 00:06:17.991 06:37:45 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:17.991 06:37:45 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:17.991 06:37:45 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70053' 00:06:17.991 06:37:45 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 70053 00:06:17.991 06:37:45 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 70053 00:06:18.560 06:37:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70071 ]] 00:06:18.560 06:37:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70071 00:06:18.560 06:37:45 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 70071 ']' 00:06:18.560 06:37:45 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 70071 00:06:18.560 06:37:45 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:18.560 06:37:45 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:18.560 
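The check_remaining_locks helper traced above boils down to comparing the lock files actually present under /var/tmp against the set expected for the surviving target's cores 0-2. A standalone sketch of that comparison (file names copied from the trace):

    # Only the first target's cores (0-2) should still have lock files at this point.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
        echo 'only the expected CPU core lock files remain'
    else
        echo "unexpected lock files: ${locks[*]}" >&2
    fi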
06:37:45 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70071 00:06:18.560 killing process with pid 70071 00:06:18.560 06:37:45 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:18.560 06:37:45 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:18.560 06:37:45 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70071' 00:06:18.560 06:37:45 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 70071 00:06:18.560 06:37:45 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 70071 00:06:18.819 06:37:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:18.819 06:37:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:18.819 06:37:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70053 ]] 00:06:18.819 06:37:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 70053 00:06:18.819 06:37:46 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 70053 ']' 00:06:18.819 06:37:46 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 70053 00:06:18.819 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (70053) - No such process 00:06:18.819 Process with pid 70053 is not found 00:06:18.819 06:37:46 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 70053 is not found' 00:06:18.819 06:37:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70071 ]] 00:06:18.819 06:37:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70071 00:06:18.819 06:37:46 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 70071 ']' 00:06:18.819 06:37:46 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 70071 00:06:18.819 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (70071) - No such process 00:06:18.819 Process with pid 70071 is not found 00:06:18.819 06:37:46 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 70071 is not found' 00:06:18.819 06:37:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:18.819 ************************************ 00:06:18.819 END TEST cpu_locks 00:06:18.819 ************************************ 00:06:18.819 00:06:18.819 real 0m18.688s 00:06:18.819 user 0m31.667s 00:06:18.819 sys 0m5.584s 00:06:18.819 06:37:46 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.819 06:37:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.079 ************************************ 00:06:19.079 END TEST event 00:06:19.079 ************************************ 00:06:19.079 00:06:19.079 real 0m45.690s 00:06:19.079 user 1m26.320s 00:06:19.079 sys 0m9.422s 00:06:19.079 06:37:46 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.079 06:37:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.079 06:37:46 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:19.079 06:37:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:19.079 06:37:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.079 06:37:46 -- common/autotest_common.sh@10 -- # set +x 00:06:19.079 ************************************ 00:06:19.079 START TEST thread 00:06:19.079 ************************************ 00:06:19.079 06:37:46 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:19.079 * Looking for test storage... 
00:06:19.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:19.079 06:37:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.079 06:37:46 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:19.079 06:37:46 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.079 06:37:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.079 ************************************ 00:06:19.079 START TEST thread_poller_perf 00:06:19.079 ************************************ 00:06:19.079 06:37:46 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.079 [2024-08-14 06:37:46.323287] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:06:19.079 [2024-08-14 06:37:46.323444] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70190 ] 00:06:19.338 [2024-08-14 06:37:46.470291] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.338 [2024-08-14 06:37:46.518356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.338 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:20.715 ====================================== 00:06:20.715 busy:2299449432 (cyc) 00:06:20.715 total_run_count: 384000 00:06:20.715 tsc_hz: 2290000000 (cyc) 00:06:20.715 ====================================== 00:06:20.715 poller_cost: 5988 (cyc), 2614 (nsec) 00:06:20.715 00:06:20.715 real 0m1.338s 00:06:20.715 user 0m1.144s 00:06:20.715 sys 0m0.087s 00:06:20.715 06:37:47 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.715 ************************************ 00:06:20.715 END TEST thread_poller_perf 00:06:20.715 ************************************ 00:06:20.715 06:37:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.715 06:37:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:20.715 06:37:47 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:20.715 06:37:47 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.715 06:37:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.715 ************************************ 00:06:20.715 START TEST thread_poller_perf 00:06:20.715 ************************************ 00:06:20.715 06:37:47 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:20.715 [2024-08-14 06:37:47.712158] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:06:20.715 [2024-08-14 06:37:47.712312] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70232 ] 00:06:20.715 [2024-08-14 06:37:47.856868] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.715 Running 1000 pollers for 1 seconds with 0 microseconds period. 
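The poller_cost figures printed in these result blocks follow directly from the counters beside them: busy cycles divided by total_run_count gives cycles per poll, and the reported tsc_hz converts that to nanoseconds. A quick re-derivation of the first run's numbers (values copied from the output above; the zero-period run below works out the same way, 2293617784 / 5136000 ≈ 446 cyc ≈ 194 nsec):

    # 2299449432 busy cycles over 384000 polls at 2.29 GHz -> ~5988 cyc, ~2614 nsec per poll.
    awk 'BEGIN {
        busy = 2299449432; runs = 384000; tsc_hz = 2290000000
        cyc_per_poll = busy / runs
        ns_per_poll  = cyc_per_poll * 1e9 / tsc_hz
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc_per_poll, ns_per_poll
    }'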
00:06:20.715 [2024-08-14 06:37:47.905117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.094 ====================================== 00:06:22.094 busy:2293617784 (cyc) 00:06:22.094 total_run_count: 5136000 00:06:22.094 tsc_hz: 2290000000 (cyc) 00:06:22.094 ====================================== 00:06:22.094 poller_cost: 446 (cyc), 194 (nsec) 00:06:22.094 00:06:22.094 real 0m1.324s 00:06:22.094 user 0m1.134s 00:06:22.094 sys 0m0.085s 00:06:22.094 06:37:48 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:22.094 06:37:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.094 ************************************ 00:06:22.094 END TEST thread_poller_perf 00:06:22.094 ************************************ 00:06:22.094 06:37:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:22.094 00:06:22.094 real 0m2.907s 00:06:22.094 user 0m2.364s 00:06:22.094 sys 0m0.341s 00:06:22.094 06:37:49 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:22.094 06:37:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.094 ************************************ 00:06:22.094 END TEST thread 00:06:22.094 ************************************ 00:06:22.094 06:37:49 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:22.094 06:37:49 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:22.094 06:37:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:22.094 06:37:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:22.094 06:37:49 -- common/autotest_common.sh@10 -- # set +x 00:06:22.094 ************************************ 00:06:22.094 START TEST app_cmdline 00:06:22.094 ************************************ 00:06:22.094 06:37:49 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:22.094 * Looking for test storage... 00:06:22.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:22.094 06:37:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:22.094 06:37:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=70302 00:06:22.094 06:37:49 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:22.094 06:37:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 70302 00:06:22.094 06:37:49 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 70302 ']' 00:06:22.094 06:37:49 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.094 06:37:49 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:22.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.094 06:37:49 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.094 06:37:49 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:22.094 06:37:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.094 [2024-08-14 06:37:49.317970] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:06:22.094 [2024-08-14 06:37:49.318097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70302 ] 00:06:22.354 [2024-08-14 06:37:49.462300] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.354 [2024-08-14 06:37:49.508235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.922 06:37:50 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.922 06:37:50 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:06:22.922 06:37:50 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:23.182 { 00:06:23.182 "version": "SPDK v24.09-pre git sha1 d47670264", 00:06:23.182 "fields": { 00:06:23.182 "major": 24, 00:06:23.182 "minor": 9, 00:06:23.182 "patch": 0, 00:06:23.182 "suffix": "-pre", 00:06:23.182 "commit": "d47670264" 00:06:23.182 } 00:06:23.182 } 00:06:23.182 06:37:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:23.182 06:37:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:23.182 06:37:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:23.182 06:37:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:23.182 06:37:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:23.182 06:37:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:23.182 06:37:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:23.182 06:37:50 app_cmdline -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:23.182 06:37:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.182 06:37:50 app_cmdline -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:23.182 06:37:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:23.182 06:37:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:23.182 06:37:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:23.182 06:37:50 app_cmdline -- common/autotest_common.sh@646 -- # local es=0 00:06:23.182 06:37:50 app_cmdline -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:23.182 06:37:50 app_cmdline -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:23.182 06:37:50 app_cmdline -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:23.182 06:37:50 app_cmdline -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:23.182 06:37:50 app_cmdline -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:23.182 06:37:50 app_cmdline -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:23.182 06:37:50 app_cmdline -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:23.182 06:37:50 app_cmdline -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:23.182 06:37:50 app_cmdline -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:23.182 06:37:50 app_cmdline -- common/autotest_common.sh@649 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:23.443 request: 00:06:23.443 { 00:06:23.443 "method": "env_dpdk_get_mem_stats", 00:06:23.443 "req_id": 1 00:06:23.443 } 00:06:23.443 Got JSON-RPC error response 00:06:23.443 response: 00:06:23.443 { 00:06:23.443 "code": -32601, 00:06:23.443 "message": "Method not found" 00:06:23.443 } 00:06:23.443 06:37:50 app_cmdline -- common/autotest_common.sh@649 -- # es=1 00:06:23.443 06:37:50 app_cmdline -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:23.443 06:37:50 app_cmdline -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:23.443 06:37:50 app_cmdline -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:23.443 06:37:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 70302 00:06:23.443 06:37:50 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 70302 ']' 00:06:23.443 06:37:50 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 70302 00:06:23.443 06:37:50 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:06:23.443 06:37:50 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:23.443 06:37:50 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70302 00:06:23.443 06:37:50 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:23.443 06:37:50 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:23.443 killing process with pid 70302 00:06:23.443 06:37:50 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70302' 00:06:23.443 06:37:50 app_cmdline -- common/autotest_common.sh@965 -- # kill 70302 00:06:23.443 06:37:50 app_cmdline -- common/autotest_common.sh@970 -- # wait 70302 00:06:24.011 00:06:24.011 real 0m1.879s 00:06:24.011 user 0m2.159s 00:06:24.011 sys 0m0.482s 00:06:24.011 06:37:50 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.011 06:37:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.011 ************************************ 00:06:24.011 END TEST app_cmdline 00:06:24.011 ************************************ 00:06:24.011 06:37:51 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:24.011 06:37:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:24.011 06:37:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.011 06:37:51 -- common/autotest_common.sh@10 -- # set +x 00:06:24.011 ************************************ 00:06:24.011 START TEST version 00:06:24.011 ************************************ 00:06:24.011 06:37:51 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:24.011 * Looking for test storage... 
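The -32601 'Method not found' response above is exactly what this part of the cmdline test probes: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside that list, such as env_dpdk_get_mem_stats, is rejected even though it is a normal SPDK RPC. A sketch of the allowed/denied pair against the same default socket (command paths as in the trace):

    # On the allow list: returns the version JSON shown earlier in this test.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
    # Not on the allow list: expected to fail with JSON-RPC error -32601 (Method not found).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats \
        || echo 'env_dpdk_get_mem_stats rejected, as expected'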
00:06:24.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:24.011 06:37:51 version -- app/version.sh@17 -- # get_header_version major 00:06:24.011 06:37:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.011 06:37:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.011 06:37:51 version -- app/version.sh@14 -- # cut -f2 00:06:24.011 06:37:51 version -- app/version.sh@17 -- # major=24 00:06:24.011 06:37:51 version -- app/version.sh@18 -- # get_header_version minor 00:06:24.011 06:37:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.011 06:37:51 version -- app/version.sh@14 -- # cut -f2 00:06:24.011 06:37:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.011 06:37:51 version -- app/version.sh@18 -- # minor=9 00:06:24.011 06:37:51 version -- app/version.sh@19 -- # get_header_version patch 00:06:24.011 06:37:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.011 06:37:51 version -- app/version.sh@14 -- # cut -f2 00:06:24.011 06:37:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.011 06:37:51 version -- app/version.sh@19 -- # patch=0 00:06:24.011 06:37:51 version -- app/version.sh@20 -- # get_header_version suffix 00:06:24.011 06:37:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.011 06:37:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.011 06:37:51 version -- app/version.sh@14 -- # cut -f2 00:06:24.011 06:37:51 version -- app/version.sh@20 -- # suffix=-pre 00:06:24.011 06:37:51 version -- app/version.sh@22 -- # version=24.9 00:06:24.011 06:37:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:24.011 06:37:51 version -- app/version.sh@28 -- # version=24.9rc0 00:06:24.011 06:37:51 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:24.011 06:37:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:24.272 06:37:51 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:24.272 06:37:51 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:24.272 00:06:24.272 real 0m0.212s 00:06:24.272 user 0m0.113s 00:06:24.272 sys 0m0.145s 00:06:24.272 06:37:51 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.272 06:37:51 version -- common/autotest_common.sh@10 -- # set +x 00:06:24.272 ************************************ 00:06:24.272 END TEST version 00:06:24.272 ************************************ 00:06:24.272 06:37:51 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:24.272 06:37:51 -- spdk/autotest.sh@201 -- # [[ 1 -eq 1 ]] 00:06:24.272 06:37:51 -- spdk/autotest.sh@202 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:24.272 06:37:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:24.272 06:37:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.272 06:37:51 -- common/autotest_common.sh@10 -- # set +x 00:06:24.272 ************************************ 00:06:24.272 START TEST bdev_raid 00:06:24.272 ************************************ 
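The version check that just completed assembles its string purely from the #define lines in include/spdk/version.h: major 24, minor 9, patch 0 and suffix -pre become 24.9rc0, and the .patch component would only be appended for a non-zero patch. A compact sketch of that assembly (grep/cut/tr pipeline copied from the trace, which assumes the tab-separated #define layout the test itself relies on; field() is just a local helper for this sketch, and the -pre -> rc0 mapping is a simplification of what the script reports):

    ver_h=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    field() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$ver_h" | cut -f2 | tr -d '"'; }
    major=$(field MAJOR); minor=$(field MINOR); patch=$(field PATCH); suffix=$(field SUFFIX)
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [[ $suffix == -pre ]] && version=${version}rc0
    echo "$version"    # -> 24.9rc0 for the tree under test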
00:06:24.272 06:37:51 bdev_raid -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:24.272 * Looking for test storage... 00:06:24.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:24.272 06:37:51 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:24.272 06:37:51 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:24.272 06:37:51 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:06:24.272 06:37:51 bdev_raid -- bdev/bdev_raid.sh@927 -- # mkdir -p /raidtest 00:06:24.272 06:37:51 bdev_raid -- bdev/bdev_raid.sh@928 -- # trap 'cleanup; exit 1' EXIT 00:06:24.272 06:37:51 bdev_raid -- bdev/bdev_raid.sh@930 -- # base_blocklen=512 00:06:24.272 06:37:51 bdev_raid -- bdev/bdev_raid.sh@932 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:24.272 06:37:51 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:24.272 06:37:51 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.272 06:37:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:24.272 ************************************ 00:06:24.272 START TEST raid0_resize_superblock_test 00:06:24.272 ************************************ 00:06:24.272 06:37:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1121 -- # raid_resize_superblock_test 0 00:06:24.272 06:37:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=0 00:06:24.272 06:37:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=70449 00:06:24.272 06:37:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:24.272 06:37:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 70449' 00:06:24.272 Process raid pid: 70449 00:06:24.272 06:37:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 70449 /var/tmp/spdk-raid.sock 00:06:24.272 06:37:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 70449 ']' 00:06:24.272 06:37:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:24.272 06:37:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:24.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:24.272 06:37:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:24.272 06:37:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:24.272 06:37:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.532 [2024-08-14 06:37:51.540989] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:06:24.532 [2024-08-14 06:37:51.541106] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.532 [2024-08-14 06:37:51.689000] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.532 [2024-08-14 06:37:51.733908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.532 [2024-08-14 06:37:51.776324] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:24.532 [2024-08-14 06:37:51.776368] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:25.470 06:37:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.470 06:37:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:06:25.471 06:37:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:06:25.471 malloc0 00:06:25.471 06:37:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:06:25.730 [2024-08-14 06:37:52.902113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:25.730 [2024-08-14 06:37:52.902219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:25.730 [2024-08-14 06:37:52.902246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:25.730 [2024-08-14 06:37:52.902257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:25.730 [2024-08-14 06:37:52.904461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:25.730 [2024-08-14 06:37:52.904502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:25.730 pt0 00:06:25.730 06:37:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:06:25.990 dbb8c652-d7b0-4a95-91e5-b601d4c0aa63 00:06:25.990 06:37:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:06:26.249 72cf4f54-ef28-4390-a44d-f1e2a5238426 00:06:26.249 06:37:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:06:26.508 f11ec012-2230-4613-a901-c92222f8dd33 00:06:26.508 06:37:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 00:06:26.508 06:37:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@884 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 0 -z 64 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:06:26.768 [2024-08-14 06:37:53.825381] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 72cf4f54-ef28-4390-a44d-f1e2a5238426 is claimed 00:06:26.768 [2024-08-14 06:37:53.825548] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev f11ec012-2230-4613-a901-c92222f8dd33 is claimed 00:06:26.768 [2024-08-14 06:37:53.825714] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:26.768 [2024-08-14 06:37:53.825727] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:26.768 [2024-08-14 06:37:53.826080] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:26.768 [2024-08-14 06:37:53.826290] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:26.768 [2024-08-14 06:37:53.826310] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:26.768 [2024-08-14 06:37:53.826481] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:26.768 06:37:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:06:26.768 06:37:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:27.027 06:37:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:06:27.027 06:37:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:06:27.027 06:37:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:27.291 06:37:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:06:27.291 06:37:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:27.291 06:37:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:27.291 06:37:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:27.291 06:37:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:27.291 [2024-08-14 06:37:54.464516] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:27.291 06:37:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:27.291 06:37:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:27.291 06:37:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 245760 == 245760 )) 00:06:27.291 06:37:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:06:27.561 [2024-08-14 06:37:54.664158] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:27.561 [2024-08-14 06:37:54.664324] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '72cf4f54-ef28-4390-a44d-f1e2a5238426' was resized: old size 131072, new size 204800 00:06:27.561 06:37:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:06:27.821 [2024-08-14 06:37:54.867753] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:27.821 [2024-08-14 06:37:54.867787] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f11ec012-2230-4613-a901-c92222f8dd33' was resized: old size 131072, new size 204800 00:06:27.821 
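At this point the raid0 superblock test has stacked malloc0 -> pt0 (passthru) -> lvstore lvs0 -> two 64 MiB lvols -> raid0 bdev 'Raid', verified 245760 blocks, and just grown both lvols to 100 MiB; the notice that follows shows the raid block count tracking them from 245760 to 393216. A condensed sketch of the same stack and its size bookkeeping (every RPC and size is taken from the trace above; rpc is shorthand for the scripts/rpc.py invocation used throughout this test):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create -b malloc0 512 512           # 512 MiB backing bdev, 512 B blocks
    $rpc bdev_passthru_create -b malloc0 -p pt0
    $rpc bdev_lvol_create_lvstore pt0 lvs0
    $rpc bdev_lvol_create -l lvs0 lvol0 64               # 64 MiB  = 131072 x 512 B blocks
    $rpc bdev_lvol_create -l lvs0 lvol1 64
    $rpc bdev_raid_create -n Raid -r 0 -z 64 -b 'lvs0/lvol0 lvs0/lvol1' -s
    $rpc bdev_lvol_resize lvs0/lvol0 100                 # 100 MiB = 204800 x 512 B blocks
    $rpc bdev_lvol_resize lvs0/lvol1 100
    # raid0 capacity is the sum of what each member contributes once superblock and
    # strip alignment are taken out: 2 x 122880 = 245760 before, 2 x 196608 = 393216 after.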
[2024-08-14 06:37:54.867823] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:27.821 06:37:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:06:27.821 06:37:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:06:28.080 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:06:28.080 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:06:28.080 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:06:28.080 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:06:28.081 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:06:28.081 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:28.081 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:06:28.081 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # jq '.[].num_blocks' 00:06:28.340 [2024-08-14 06:37:55.530716] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:28.340 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:06:28.340 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:06:28.341 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # (( 393216 == 393216 )) 00:06:28.341 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:06:28.600 [2024-08-14 06:37:55.730190] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:28.600 [2024-08-14 06:37:55.730287] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:28.600 [2024-08-14 06:37:55.730301] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:28.600 [2024-08-14 06:37:55.730313] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:28.600 [2024-08-14 06:37:55.730466] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:28.600 [2024-08-14 06:37:55.730506] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:28.600 [2024-08-14 06:37:55.730521] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:28.600 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:06:28.860 [2024-08-14 06:37:55.933743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:28.860 [2024-08-14 06:37:55.933906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:28.860 [2024-08-14 06:37:55.933935] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:28.860 [2024-08-14 06:37:55.933944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:28.860 [2024-08-14 06:37:55.936132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:28.860 [2024-08-14 06:37:55.936178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:28.860 pt0 00:06:28.860 [2024-08-14 06:37:55.937672] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 72cf4f54-ef28-4390-a44d-f1e2a5238426 00:06:28.860 [2024-08-14 06:37:55.937759] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 72cf4f54-ef28-4390-a44d-f1e2a5238426 is claimed 00:06:28.860 [2024-08-14 06:37:55.937851] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f11ec012-2230-4613-a901-c92222f8dd33 00:06:28.860 [2024-08-14 06:37:55.937866] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev f11ec012-2230-4613-a901-c92222f8dd33 is claimed 00:06:28.860 [2024-08-14 06:37:55.938013] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev f11ec012-2230-4613-a901-c92222f8dd33 (2) smaller than existing raid bdev Raid (3) 00:06:28.860 [2024-08-14 06:37:55.938055] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:06:28.860 [2024-08-14 06:37:55.938066] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:28.860 [2024-08-14 06:37:55.938294] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:06:28.860 [2024-08-14 06:37:55.938450] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:06:28.860 [2024-08-14 06:37:55.938460] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:06:28.860 [2024-08-14 06:37:55.938568] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:28.860 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:06:28.860 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:28.860 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:06:28.860 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # jq '.[].num_blocks' 00:06:29.120 [2024-08-14 06:37:56.145681] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:29.120 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:06:29.120 06:37:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:06:29.120 06:37:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # (( 393216 == 393216 )) 00:06:29.120 06:37:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 70449 00:06:29.120 06:37:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 70449 ']' 00:06:29.120 06:37:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # kill -0 70449 00:06:29.120 06:37:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@951 -- # uname 00:06:29.120 06:37:56 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.121 06:37:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70449 00:06:29.121 06:37:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:29.121 06:37:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.121 06:37:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70449' 00:06:29.121 killing process with pid 70449 00:06:29.121 06:37:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@965 -- # kill 70449 00:06:29.121 [2024-08-14 06:37:56.209829] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:29.121 [2024-08-14 06:37:56.209983] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:29.121 06:37:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # wait 70449 00:06:29.121 [2024-08-14 06:37:56.210086] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:29.121 [2024-08-14 06:37:56.210136] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:06:29.121 [2024-08-14 06:37:56.370860] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:29.380 06:37:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:06:29.380 00:06:29.380 real 0m5.154s 00:06:29.380 user 0m8.349s 00:06:29.380 sys 0m0.903s 00:06:29.380 06:37:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.380 06:37:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.380 ************************************ 00:06:29.380 END TEST raid0_resize_superblock_test 00:06:29.380 ************************************ 00:06:29.640 06:37:56 bdev_raid -- bdev/bdev_raid.sh@933 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:29.640 06:37:56 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:29.640 06:37:56 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.640 06:37:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:29.640 ************************************ 00:06:29.640 START TEST raid1_resize_superblock_test 00:06:29.640 ************************************ 00:06:29.640 06:37:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1121 -- # raid_resize_superblock_test 1 00:06:29.640 06:37:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=1 00:06:29.640 06:37:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=70568 00:06:29.640 06:37:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:29.640 Process raid pid: 70568 00:06:29.640 06:37:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 70568' 00:06:29.640 06:37:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 70568 /var/tmp/spdk-raid.sock 00:06:29.640 06:37:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 70568 ']' 00:06:29.640 06:37:56 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:29.640 06:37:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:29.640 06:37:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:29.640 06:37:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.640 06:37:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.640 [2024-08-14 06:37:56.757689] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:06:29.640 [2024-08-14 06:37:56.757905] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.900 [2024-08-14 06:37:56.904349] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.900 [2024-08-14 06:37:56.950511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.900 [2024-08-14 06:37:56.993369] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:29.900 [2024-08-14 06:37:56.993401] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:30.468 06:37:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.468 06:37:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:06:30.468 06:37:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:06:30.728 malloc0 00:06:30.728 06:37:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:06:30.987 [2024-08-14 06:37:58.168956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:30.987 [2024-08-14 06:37:58.169041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:30.987 [2024-08-14 06:37:58.169075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:30.987 [2024-08-14 06:37:58.169091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:30.987 [2024-08-14 06:37:58.171397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:30.987 [2024-08-14 06:37:58.171440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:30.988 pt0 00:06:30.988 06:37:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:06:31.247 5adbaa34-c239-4789-b846-65ccec23da1d 00:06:31.507 06:37:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:06:31.507 0fe0ceb3-ea5b-4537-9be0-2a98b9ecc429 00:06:31.507 06:37:58 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:06:31.767 a0437a14-8e04-4edc-8d8d-e97be77f4d23 00:06:31.767 06:37:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 00:06:31.767 06:37:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:06:32.027 [2024-08-14 06:37:59.144107] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0fe0ceb3-ea5b-4537-9be0-2a98b9ecc429 is claimed 00:06:32.027 [2024-08-14 06:37:59.144274] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev a0437a14-8e04-4edc-8d8d-e97be77f4d23 is claimed 00:06:32.027 [2024-08-14 06:37:59.144438] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:32.027 [2024-08-14 06:37:59.144450] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:32.027 [2024-08-14 06:37:59.144789] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:32.027 [2024-08-14 06:37:59.144995] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:32.027 [2024-08-14 06:37:59.145017] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:32.027 [2024-08-14 06:37:59.145163] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:32.027 06:37:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:06:32.027 06:37:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:32.287 06:37:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:06:32.287 06:37:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:06:32.287 06:37:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:32.547 06:37:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:06:32.547 06:37:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:32.547 06:37:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:32.547 06:37:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:32.547 06:37:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:32.547 [2024-08-14 06:37:59.787302] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:32.807 06:37:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:32.807 06:37:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:32.807 06:37:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 122880 == 122880 )) 00:06:32.807 06:37:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_lvol_resize lvs0/lvol0 100 00:06:32.807 [2024-08-14 06:37:59.998895] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:32.807 [2024-08-14 06:37:59.998938] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0fe0ceb3-ea5b-4537-9be0-2a98b9ecc429' was resized: old size 131072, new size 204800 00:06:32.807 06:38:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:06:33.066 [2024-08-14 06:38:00.206507] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:33.066 [2024-08-14 06:38:00.206548] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'a0437a14-8e04-4edc-8d8d-e97be77f4d23' was resized: old size 131072, new size 204800 00:06:33.066 [2024-08-14 06:38:00.206581] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:33.066 06:38:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:06:33.066 06:38:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:06:33.325 06:38:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:06:33.325 06:38:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:06:33.325 06:38:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:06:33.585 06:38:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:06:33.585 06:38:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:06:33.585 06:38:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:33.585 06:38:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:06:33.585 06:38:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # jq '.[].num_blocks' 00:06:33.585 [2024-08-14 06:38:00.829475] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:33.844 06:38:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:06:33.844 06:38:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:06:33.844 06:38:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # (( 196608 == 196608 )) 00:06:33.844 06:38:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:06:33.844 [2024-08-14 06:38:01.028947] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:33.844 [2024-08-14 06:38:01.029049] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:33.844 [2024-08-14 06:38:01.029095] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:33.844 [2024-08-14 06:38:01.029332] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:33.844 [2024-08-14 06:38:01.029510] bdev_raid.c: 
487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:33.844 [2024-08-14 06:38:01.029567] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:33.844 [2024-08-14 06:38:01.029578] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:33.844 06:38:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:06:34.104 [2024-08-14 06:38:01.248483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:34.104 [2024-08-14 06:38:01.248567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:34.104 [2024-08-14 06:38:01.248592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:34.104 [2024-08-14 06:38:01.248601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:34.104 [2024-08-14 06:38:01.250840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:34.104 [2024-08-14 06:38:01.250877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:34.104 [2024-08-14 06:38:01.252511] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0fe0ceb3-ea5b-4537-9be0-2a98b9ecc429 00:06:34.104 [2024-08-14 06:38:01.252566] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0fe0ceb3-ea5b-4537-9be0-2a98b9ecc429 is claimed 00:06:34.104 [2024-08-14 06:38:01.252661] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev a0437a14-8e04-4edc-8d8d-e97be77f4d23 00:06:34.104 [2024-08-14 06:38:01.252676] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev a0437a14-8e04-4edc-8d8d-e97be77f4d23 is claimed 00:06:34.104 [2024-08-14 06:38:01.252820] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev a0437a14-8e04-4edc-8d8d-e97be77f4d23 (2) smaller than existing raid bdev Raid (3) 00:06:34.104 [2024-08-14 06:38:01.252850] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:06:34.104 [2024-08-14 06:38:01.252858] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:34.104 [2024-08-14 06:38:01.253096] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:06:34.104 [2024-08-14 06:38:01.253282] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:06:34.104 [2024-08-14 06:38:01.253294] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:06:34.104 [2024-08-14 06:38:01.253408] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:34.104 pt0 00:06:34.104 06:38:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:06:34.104 06:38:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:34.104 06:38:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:06:34.104 06:38:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # jq '.[].num_blocks' 00:06:34.364 [2024-08-14 06:38:01.464525] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:34.364 
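(For reference: the superblock-resize flow being verified here reduces to the RPC sequence sketched below. This is a summary assembled from the commands visible in this run, not part of the test script itself; the rpc.py path, socket, bdev names and sizes are the ones used above, and 196608 is the post-resize block count checked next.)

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # build a RAID1 bdev with an on-disk superblock (-s) over the two 64 MiB lvols
  $RPC bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s
  # grow both base lvols from 64 MiB to 100 MiB; the raid bdev picks up the new size
  $RPC bdev_lvol_resize lvs0/lvol0 100
  $RPC bdev_lvol_resize lvs0/lvol1 100
  # raid block count should move from 122880 to 196608 (512-byte blocks)
  $RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'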
06:38:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:06:34.364 06:38:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:06:34.364 06:38:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # (( 196608 == 196608 )) 00:06:34.364 06:38:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 70568 00:06:34.365 06:38:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 70568 ']' 00:06:34.365 06:38:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # kill -0 70568 00:06:34.365 06:38:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@951 -- # uname 00:06:34.365 06:38:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:34.365 06:38:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70568 00:06:34.365 killing process with pid 70568 00:06:34.365 06:38:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:34.365 06:38:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:34.365 06:38:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70568' 00:06:34.365 06:38:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@965 -- # kill 70568 00:06:34.365 [2024-08-14 06:38:01.529621] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:34.365 [2024-08-14 06:38:01.529709] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:34.365 [2024-08-14 06:38:01.529767] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:34.365 [2024-08-14 06:38:01.529777] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:06:34.365 06:38:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # wait 70568 00:06:34.633 [2024-08-14 06:38:01.691142] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:34.918 06:38:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:06:34.918 00:06:34.918 real 0m5.244s 00:06:34.918 user 0m8.551s 00:06:34.918 sys 0m0.886s 00:06:34.918 06:38:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.918 06:38:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.918 ************************************ 00:06:34.918 END TEST raid1_resize_superblock_test 00:06:34.918 ************************************ 00:06:34.918 06:38:01 bdev_raid -- bdev/bdev_raid.sh@935 -- # uname -s 00:06:34.918 06:38:01 bdev_raid -- bdev/bdev_raid.sh@935 -- # '[' Linux = Linux ']' 00:06:34.918 06:38:01 bdev_raid -- bdev/bdev_raid.sh@935 -- # modprobe -n nbd 00:06:34.918 06:38:01 bdev_raid -- bdev/bdev_raid.sh@936 -- # has_nbd=true 00:06:34.918 06:38:01 bdev_raid -- bdev/bdev_raid.sh@937 -- # modprobe nbd 00:06:34.918 06:38:02 bdev_raid -- bdev/bdev_raid.sh@938 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:34.918 06:38:02 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:34.918 06:38:02 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.918 06:38:02 
bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:34.918 ************************************ 00:06:34.918 START TEST raid_function_test_raid0 00:06:34.918 ************************************ 00:06:34.918 06:38:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1121 -- # raid_function_test raid0 00:06:34.918 06:38:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:06:34.918 06:38:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:06:34.918 06:38:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:06:34.918 06:38:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=70687 00:06:34.918 Process raid pid: 70687 00:06:34.918 06:38:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:34.918 06:38:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 70687' 00:06:34.918 06:38:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 70687 /var/tmp/spdk-raid.sock 00:06:34.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:34.918 06:38:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@827 -- # '[' -z 70687 ']' 00:06:34.918 06:38:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:34.918 06:38:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.918 06:38:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:34.918 06:38:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.918 06:38:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:34.918 [2024-08-14 06:38:02.097378] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:06:34.918 [2024-08-14 06:38:02.097956] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.178 [2024-08-14 06:38:02.243285] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.178 [2024-08-14 06:38:02.294427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.178 [2024-08-14 06:38:02.337023] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:35.178 [2024-08-14 06:38:02.337107] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:35.748 06:38:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:35.748 06:38:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # return 0 00:06:35.748 06:38:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:06:35.748 06:38:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:06:35.748 06:38:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:06:35.748 06:38:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:06:35.748 06:38:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:06:36.008 [2024-08-14 06:38:03.176875] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:36.008 [2024-08-14 06:38:03.178898] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:36.008 [2024-08-14 06:38:03.178991] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:36.009 [2024-08-14 06:38:03.179003] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:36.009 [2024-08-14 06:38:03.179376] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:36.009 [2024-08-14 06:38:03.179539] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:36.009 [2024-08-14 06:38:03.179556] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:06:36.009 [2024-08-14 06:38:03.179728] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:36.009 Base_1 00:06:36.009 Base_2 00:06:36.009 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:06:36.009 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:06:36.009 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:06:36.269 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:06:36.269 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:06:36.269 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:06:36.269 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:36.269 06:38:03 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:36.269 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.269 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:36.269 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.269 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:36.269 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.269 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:36.269 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:06:36.528 [2024-08-14 06:38:03.608167] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:36.528 /dev/nbd0 00:06:36.528 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.528 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.528 06:38:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:36.528 06:38:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@865 -- # local i 00:06:36.528 06:38:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:36.528 06:38:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:36.528 06:38:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:36.529 06:38:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # break 00:06:36.529 06:38:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:36.529 06:38:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:36.529 06:38:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.529 1+0 records in 00:06:36.529 1+0 records out 00:06:36.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022602 s, 18.1 MB/s 00:06:36.529 06:38:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.529 06:38:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # size=4096 00:06:36.529 06:38:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.529 06:38:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:36.529 06:38:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # return 0 00:06:36.529 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.529 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:36.529 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:06:36.529 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:36.529 06:38:03 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.788 { 00:06:36.788 "nbd_device": "/dev/nbd0", 00:06:36.788 "bdev_name": "raid" 00:06:36.788 } 00:06:36.788 ]' 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.788 { 00:06:36.788 "nbd_device": "/dev/nbd0", 00:06:36.788 "bdev_name": "raid" 00:06:36.788 } 00:06:36.788 ]' 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:36.788 4096+0 records in 00:06:36.788 4096+0 records out 00:06:36.788 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.031999 s, 65.5 
MB/s 00:06:36.788 06:38:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:37.049 4096+0 records in 00:06:37.049 4096+0 records out 00:06:37.049 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.195098 s, 10.7 MB/s 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:37.049 128+0 records in 00:06:37.049 128+0 records out 00:06:37.049 65536 bytes (66 kB, 64 KiB) copied, 0.000585745 s, 112 MB/s 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:37.049 2035+0 records in 00:06:37.049 2035+0 records out 00:06:37.049 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0154045 s, 67.6 MB/s 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:37.049 456+0 records in 00:06:37.049 456+0 records out 00:06:37.049 233472 bytes (233 kB, 228 KiB) copied, 0.00364309 s, 64.1 MB/s 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 
00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.049 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:06:37.309 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.309 [2024-08-14 06:38:04.491847] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:37.309 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.309 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.309 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.309 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.309 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.309 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:37.309 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.309 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:06:37.309 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:37.309 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:06:37.569 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.569 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.569 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.569 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@65 -- # count=0 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 70687 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@946 -- # '[' -z 70687 ']' 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # kill -0 70687 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@951 -- # uname 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70687 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70687' 00:06:37.570 killing process with pid 70687 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@965 -- # kill 70687 00:06:37.570 [2024-08-14 06:38:04.805799] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:37.570 [2024-08-14 06:38:04.805965] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:37.570 06:38:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # wait 70687 00:06:37.570 [2024-08-14 06:38:04.806063] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:37.570 [2024-08-14 06:38:04.806077] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:06:37.830 [2024-08-14 06:38:04.829504] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:37.830 06:38:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:06:37.830 00:06:37.830 real 0m3.050s 00:06:37.830 user 0m4.046s 00:06:37.830 sys 0m0.942s 00:06:37.830 06:38:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.830 06:38:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:37.830 ************************************ 00:06:37.830 END TEST raid_function_test_raid0 00:06:37.830 ************************************ 00:06:38.090 06:38:05 bdev_raid -- bdev/bdev_raid.sh@939 -- # run_test raid_function_test_concat raid_function_test concat 00:06:38.090 06:38:05 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:38.090 06:38:05 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.090 06:38:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:38.090 ************************************ 00:06:38.090 START TEST raid_function_test_concat 00:06:38.090 ************************************ 00:06:38.090 06:38:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1121 -- # raid_function_test concat 00:06:38.090 06:38:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 
00:06:38.090 06:38:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:06:38.090 06:38:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:06:38.090 06:38:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=70811 00:06:38.090 06:38:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:38.090 Process raid pid: 70811 00:06:38.090 06:38:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 70811' 00:06:38.090 06:38:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 70811 /var/tmp/spdk-raid.sock 00:06:38.090 06:38:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@827 -- # '[' -z 70811 ']' 00:06:38.090 06:38:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:38.090 06:38:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.090 06:38:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:38.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:38.090 06:38:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.090 06:38:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:38.090 [2024-08-14 06:38:05.210739] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:06:38.090 [2024-08-14 06:38:05.210969] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.350 [2024-08-14 06:38:05.358231] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.350 [2024-08-14 06:38:05.407122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.350 [2024-08-14 06:38:05.450155] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:38.350 [2024-08-14 06:38:05.450296] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:38.920 06:38:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.920 06:38:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # return 0 00:06:38.920 06:38:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:06:38.920 06:38:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:06:38.920 06:38:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:06:38.920 06:38:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:06:38.920 06:38:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:06:39.180 [2024-08-14 06:38:06.290983] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:39.180 [2024-08-14 06:38:06.293267] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:39.180 [2024-08-14 06:38:06.293389] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:39.180 [2024-08-14 06:38:06.293402] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:39.180 [2024-08-14 06:38:06.293721] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:39.180 [2024-08-14 06:38:06.293849] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:39.180 [2024-08-14 06:38:06.293864] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:06:39.180 [2024-08-14 06:38:06.293990] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:39.180 Base_1 00:06:39.180 Base_2 00:06:39.180 06:38:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:06:39.180 06:38:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:06:39.180 06:38:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:06:39.440 06:38:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:06:39.440 06:38:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:06:39.440 06:38:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:06:39.440 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 
00:06:39.440 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:39.440 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.440 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:39.440 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.440 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:39.440 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.440 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:39.440 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:06:39.700 [2024-08-14 06:38:06.710325] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:39.700 /dev/nbd0 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@865 -- # local i 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # break 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.700 1+0 records in 00:06:39.700 1+0 records out 00:06:39.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260784 s, 15.7 MB/s 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # size=4096 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # return 0 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:06:39.700 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:06:39.960 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.960 { 00:06:39.960 "nbd_device": "/dev/nbd0", 00:06:39.960 "bdev_name": "raid" 00:06:39.960 } 00:06:39.960 ]' 00:06:39.960 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.960 { 00:06:39.960 "nbd_device": "/dev/nbd0", 00:06:39.960 "bdev_name": "raid" 00:06:39.960 } 00:06:39.960 ]' 00:06:39.960 06:38:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.960 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:39.960 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.960 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:39.960 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:39.960 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:39.960 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:06:39.960 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:06:39.960 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:06:39.960 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:06:39.960 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:06:39.960 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:39.960 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:06:39.961 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:39.961 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:06:39.961 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:06:39.961 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:06:39.961 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:06:39.961 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:06:39.961 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:06:39.961 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:06:39.961 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:06:39.961 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:06:39.961 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:06:39.961 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:06:39.961 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:39.961 4096+0 records in 00:06:39.961 
4096+0 records out 00:06:39.961 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0338067 s, 62.0 MB/s 00:06:39.961 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:40.220 4096+0 records in 00:06:40.220 4096+0 records out 00:06:40.220 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.193467 s, 10.8 MB/s 00:06:40.220 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:40.221 128+0 records in 00:06:40.221 128+0 records out 00:06:40.221 65536 bytes (66 kB, 64 KiB) copied, 0.0012894 s, 50.8 MB/s 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:40.221 2035+0 records in 00:06:40.221 2035+0 records out 00:06:40.221 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0138973 s, 75.0 MB/s 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:40.221 456+0 records in 00:06:40.221 456+0 records out 00:06:40.221 233472 bytes (233 kB, 228 KiB) copied, 0.00323488 s, 72.2 MB/s 00:06:40.221 
06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.221 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:06:40.480 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.480 [2024-08-14 06:38:07.636795] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:40.480 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.480 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.480 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.480 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.481 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.481 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:40.481 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.481 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:06:40.481 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:40.481 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.741 
06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 70811 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@946 -- # '[' -z 70811 ']' 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # kill -0 70811 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@951 -- # uname 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70811 00:06:40.741 killing process with pid 70811 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70811' 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@965 -- # kill 70811 00:06:40.741 [2024-08-14 06:38:07.966676] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:40.741 [2024-08-14 06:38:07.966782] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:40.741 06:38:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # wait 70811 00:06:40.741 [2024-08-14 06:38:07.966855] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:40.741 [2024-08-14 06:38:07.966871] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:06:40.741 [2024-08-14 06:38:07.990360] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:41.001 06:38:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:06:41.001 00:06:41.001 real 0m3.102s 00:06:41.001 user 0m4.120s 00:06:41.001 sys 0m0.974s 00:06:41.001 06:38:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.001 06:38:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:41.001 ************************************ 00:06:41.001 END TEST raid_function_test_concat 00:06:41.001 ************************************ 00:06:41.261 06:38:08 bdev_raid -- bdev/bdev_raid.sh@942 -- # run_test raid0_resize_test raid_resize_test 0 00:06:41.261 06:38:08 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:41.261 06:38:08 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.261 06:38:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:41.261 ************************************ 00:06:41.261 START TEST raid0_resize_test 00:06:41.261 ************************************ 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1121 -- # 
raid_resize_test 0 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=0 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:06:41.261 Process raid pid: 70932 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=70932 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 70932' 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 70932 /var/tmp/spdk-raid.sock 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@827 -- # '[' -z 70932 ']' 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:41.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.261 06:38:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.261 [2024-08-14 06:38:08.379672] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:06:41.261 [2024-08-14 06:38:08.379825] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.525 [2024-08-14 06:38:08.526688] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.525 [2024-08-14 06:38:08.577794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.525 [2024-08-14 06:38:08.620425] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.525 [2024-08-14 06:38:08.620462] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.099 06:38:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:42.099 06:38:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # return 0 00:06:42.099 06:38:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:06:42.359 Base_1 00:06:42.359 06:38:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:06:42.359 Base_2 00:06:42.359 06:38:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 0 -eq 0 ']' 00:06:42.359 06:38:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:06:42.619 [2024-08-14 06:38:09.779611] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:42.619 [2024-08-14 06:38:09.781743] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:42.619 [2024-08-14 06:38:09.781822] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:42.619 [2024-08-14 06:38:09.781833] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:42.619 [2024-08-14 06:38:09.782161] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:42.619 [2024-08-14 06:38:09.782319] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:42.619 [2024-08-14 06:38:09.782333] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:42.619 [2024-08-14 06:38:09.782487] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:42.619 06:38:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:06:42.879 [2024-08-14 06:38:09.979248] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:42.879 [2024-08-14 06:38:09.979293] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:42.879 true 00:06:42.879 06:38:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:42.879 06:38:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:06:43.138 [2024-08-14 06:38:10.187050] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.138 06:38:10 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=131072 00:06:43.138 06:38:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=64 00:06:43.138 06:38:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 0 -eq 0 ']' 00:06:43.138 06:38:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # expected_size=64 00:06:43.138 06:38:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 64 '!=' 64 ']' 00:06:43.138 06:38:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:06:43.398 [2024-08-14 06:38:10.394502] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:43.398 [2024-08-14 06:38:10.394546] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:43.398 [2024-08-14 06:38:10.394579] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:43.398 true 00:06:43.398 06:38:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:43.398 06:38:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:06:43.398 [2024-08-14 06:38:10.590379] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.398 06:38:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=262144 00:06:43.398 06:38:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=128 00:06:43.398 06:38:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 0 -eq 0 ']' 00:06:43.398 06:38:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@393 -- # expected_size=128 00:06:43.398 06:38:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 128 '!=' 128 ']' 00:06:43.398 06:38:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 70932 00:06:43.398 06:38:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@946 -- # '[' -z 70932 ']' 00:06:43.398 06:38:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # kill -0 70932 00:06:43.398 06:38:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # uname 00:06:43.398 06:38:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:43.398 06:38:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70932 00:06:43.658 06:38:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:43.658 06:38:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:43.658 06:38:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70932' 00:06:43.658 killing process with pid 70932 00:06:43.658 06:38:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@965 -- # kill 70932 00:06:43.658 [2024-08-14 06:38:10.654425] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:43.658 [2024-08-14 06:38:10.654591] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.658 06:38:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # wait 70932 00:06:43.658 [2024-08-14 06:38:10.654683] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:06:43.658 [2024-08-14 06:38:10.654732] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:43.658 [2024-08-14 06:38:10.656240] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:43.658 06:38:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:06:43.658 00:06:43.658 real 0m2.592s 00:06:43.658 user 0m3.899s 00:06:43.658 sys 0m0.420s 00:06:43.658 06:38:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.658 06:38:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.658 ************************************ 00:06:43.658 END TEST raid0_resize_test 00:06:43.658 ************************************ 00:06:43.919 06:38:10 bdev_raid -- bdev/bdev_raid.sh@943 -- # run_test raid1_resize_test raid_resize_test 1 00:06:43.919 06:38:10 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:43.919 06:38:10 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.919 06:38:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:43.919 ************************************ 00:06:43.919 START TEST raid1_resize_test 00:06:43.919 ************************************ 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1121 -- # raid_resize_test 1 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=1 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:06:43.919 Process raid pid: 71000 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=71000 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 71000' 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 71000 /var/tmp/spdk-raid.sock 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@827 -- # '[' -z 71000 ']' 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:43.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
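The raid0_resize_test that just finished and the raid1_resize_test starting here drive the same RPC sequence; only the level passed to bdev_raid_create differs, and only the raid0 run takes a strip size. A condensed sketch of that sequence as it appears in the trace, using a hypothetical rpc wrapper for brevity; the exact size assertions live in bdev_raid.sh's raid_resize_test:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }  # convenience helper, not part of the original script

  # Two 32 MiB null bdevs with a 512-byte block size.
  rpc bdev_null_create Base_1 32 512
  rpc bdev_null_create Base_2 32 512

  # raid0 build with a 64 KiB strip; the raid1 run instead issues: bdev_raid_create -r 1 -b 'Base_1 Base_2' -n Raid
  rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid

  # Grow one base bdev, then the other, reading back the raid's block count after each step.
  rpc bdev_null_resize Base_1 64
  rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'
  rpc bdev_null_resize Base_2 64
  rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'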
00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.919 06:38:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.919 [2024-08-14 06:38:11.033233] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:06:43.919 [2024-08-14 06:38:11.033349] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.179 [2024-08-14 06:38:11.180568] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.179 [2024-08-14 06:38:11.226249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.179 [2024-08-14 06:38:11.268538] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.179 [2024-08-14 06:38:11.268571] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.749 06:38:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.749 06:38:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # return 0 00:06:44.749 06:38:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:06:45.009 Base_1 00:06:45.009 06:38:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:06:45.268 Base_2 00:06:45.268 06:38:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 1 -eq 0 ']' 00:06:45.268 06:38:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@367 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r 1 -b 'Base_1 Base_2' -n Raid 00:06:45.268 [2024-08-14 06:38:12.495386] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:45.268 [2024-08-14 06:38:12.497482] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:45.268 [2024-08-14 06:38:12.497667] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:45.268 [2024-08-14 06:38:12.497682] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:45.268 [2024-08-14 06:38:12.498031] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:45.268 [2024-08-14 06:38:12.498146] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:45.268 [2024-08-14 06:38:12.498161] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:45.269 [2024-08-14 06:38:12.498340] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:45.269 06:38:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:06:45.528 [2024-08-14 06:38:12.718938] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:45.528 [2024-08-14 06:38:12.718980] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:45.528 true 00:06:45.528 06:38:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 
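The num_blocks values checked in this raid1 run differ from the raid0 run above in a way the trace makes explicit: with two 32 MiB base bdevs the raid0 volume reported 131072 blocks while the raid1 volume reports 65536, and in both runs resizing only Base_1 leaves the count unchanged until Base_2 grows as well (raid0: 131072 to 262144, raid1: 65536 to 131072). A short worked check of the raid1 numbers at the 512-byte block size used here:

  per-base blocks  = 32 MiB / 512 B       = 65536
  raid1 num_blocks = smallest base bdev   = 65536   (32 MiB usable, hence raid_size_mb=32 and expected_size=32)
  after both bases reach 64 MiB           = 131072  (64 MiB usable, hence raid_size_mb=64 and expected_size=64)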
00:06:45.528 06:38:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:45.788 [2024-08-14 06:38:12.926776] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.788 06:38:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=65536 00:06:45.788 06:38:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=32 00:06:45.788 06:38:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 1 -eq 0 ']' 00:06:45.788 06:38:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@379 -- # expected_size=32 00:06:45.788 06:38:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 32 '!=' 32 ']' 00:06:45.788 06:38:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:06:46.049 [2024-08-14 06:38:13.142247] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:46.049 [2024-08-14 06:38:13.142288] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:46.049 [2024-08-14 06:38:13.142317] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:46.049 true 00:06:46.049 06:38:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:06:46.049 06:38:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:46.308 [2024-08-14 06:38:13.366008] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:46.308 06:38:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=131072 00:06:46.308 06:38:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=64 00:06:46.308 06:38:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 1 -eq 0 ']' 00:06:46.308 06:38:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@395 -- # expected_size=64 00:06:46.308 06:38:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 64 '!=' 64 ']' 00:06:46.308 06:38:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 71000 00:06:46.308 06:38:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@946 -- # '[' -z 71000 ']' 00:06:46.308 06:38:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # kill -0 71000 00:06:46.308 06:38:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@951 -- # uname 00:06:46.308 06:38:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:46.308 06:38:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71000 00:06:46.308 06:38:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:46.308 06:38:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:46.308 06:38:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71000' 00:06:46.308 killing process with pid 71000 00:06:46.308 06:38:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@965 -- # kill 71000 00:06:46.308 [2024-08-14 06:38:13.429224] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:46.308 06:38:13 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # wait 71000 00:06:46.308 [2024-08-14 06:38:13.429443] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:46.308 [2024-08-14 06:38:13.429929] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:46.308 [2024-08-14 06:38:13.430007] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:46.308 [2024-08-14 06:38:13.431245] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:46.568 06:38:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:06:46.568 00:06:46.568 real 0m2.729s 00:06:46.568 user 0m4.134s 00:06:46.568 sys 0m0.475s 00:06:46.568 06:38:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.568 06:38:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.568 ************************************ 00:06:46.568 END TEST raid1_resize_test 00:06:46.568 ************************************ 00:06:46.568 06:38:13 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:06:46.568 06:38:13 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:06:46.568 06:38:13 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:46.568 06:38:13 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:06:46.568 06:38:13 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.568 06:38:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:46.568 ************************************ 00:06:46.568 START TEST raid_state_function_test 00:06:46.568 ************************************ 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 false 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local 
raid_bdev_name=Existed_Raid 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:06:46.568 Process raid pid: 71068 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=71068 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 71068' 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 71068 /var/tmp/spdk-raid.sock 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 71068 ']' 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:46.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.568 06:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.829 [2024-08-14 06:38:13.839294] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
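The raid_state_function_test process starting above walks Existed_Raid through the configuring, online, and offline states, which is what the JSON dumps below are verifying. A condensed, hedged sketch of that walk; the script in the log additionally deletes and re-creates the raid between steps, and its verify_raid_bdev_state helper compares more fields than the single jq filter shown here:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }  # convenience helper, not part of the original script

  # Creating the raid before its base bdevs exist leaves it in the "configuring" state.
  rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # configuring

  # Creating the 32 MiB malloc base bdevs lets the raid module claim them and bring the raid online.
  rpc bdev_malloc_create 32 512 -b BaseBdev1
  rpc bdev_malloc_create 32 512 -b BaseBdev2
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # online

  # raid0 has no redundancy, so removing one base bdev drops the raid to "offline".
  rpc bdev_malloc_delete BaseBdev1
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # offline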
00:06:46.829 [2024-08-14 06:38:13.839527] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.829 [2024-08-14 06:38:13.989151] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.829 [2024-08-14 06:38:14.042028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.090 [2024-08-14 06:38:14.099730] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.090 [2024-08-14 06:38:14.099920] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.660 06:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.660 06:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:06:47.660 06:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:47.660 [2024-08-14 06:38:14.883630] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:47.660 [2024-08-14 06:38:14.883726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:47.660 [2024-08-14 06:38:14.883742] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:47.660 [2024-08-14 06:38:14.883753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:47.660 06:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:47.660 06:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:47.660 06:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:47.660 06:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:47.660 06:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:47.660 06:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:47.660 06:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:47.660 06:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:47.660 06:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:47.660 06:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:47.660 06:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:47.660 06:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:47.920 06:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:47.920 "name": "Existed_Raid", 00:06:47.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:47.920 "strip_size_kb": 64, 00:06:47.920 "state": "configuring", 00:06:47.920 "raid_level": "raid0", 00:06:47.920 "superblock": false, 00:06:47.920 "num_base_bdevs": 2, 
00:06:47.921 "num_base_bdevs_discovered": 0, 00:06:47.921 "num_base_bdevs_operational": 2, 00:06:47.921 "base_bdevs_list": [ 00:06:47.921 { 00:06:47.921 "name": "BaseBdev1", 00:06:47.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:47.921 "is_configured": false, 00:06:47.921 "data_offset": 0, 00:06:47.921 "data_size": 0 00:06:47.921 }, 00:06:47.921 { 00:06:47.921 "name": "BaseBdev2", 00:06:47.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:47.921 "is_configured": false, 00:06:47.921 "data_offset": 0, 00:06:47.921 "data_size": 0 00:06:47.921 } 00:06:47.921 ] 00:06:47.921 }' 00:06:47.921 06:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:47.921 06:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.490 06:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:48.749 [2024-08-14 06:38:15.889828] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:48.749 [2024-08-14 06:38:15.889982] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:48.749 06:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:49.008 [2024-08-14 06:38:16.093495] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:49.008 [2024-08-14 06:38:16.093645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:49.008 [2024-08-14 06:38:16.093702] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:49.008 [2024-08-14 06:38:16.093731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:49.008 06:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:49.267 [2024-08-14 06:38:16.310478] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:49.267 BaseBdev1 00:06:49.267 06:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:06:49.267 06:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:06:49.267 06:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:06:49.267 06:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:06:49.267 06:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:06:49.267 06:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:06:49.267 06:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:49.525 06:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:49.526 [ 00:06:49.526 { 00:06:49.526 "name": "BaseBdev1", 00:06:49.526 "aliases": [ 00:06:49.526 
"14296794-ffed-4b3f-989b-571f7f258581" 00:06:49.526 ], 00:06:49.526 "product_name": "Malloc disk", 00:06:49.526 "block_size": 512, 00:06:49.526 "num_blocks": 65536, 00:06:49.526 "uuid": "14296794-ffed-4b3f-989b-571f7f258581", 00:06:49.526 "assigned_rate_limits": { 00:06:49.526 "rw_ios_per_sec": 0, 00:06:49.526 "rw_mbytes_per_sec": 0, 00:06:49.526 "r_mbytes_per_sec": 0, 00:06:49.526 "w_mbytes_per_sec": 0 00:06:49.526 }, 00:06:49.526 "claimed": true, 00:06:49.526 "claim_type": "exclusive_write", 00:06:49.526 "zoned": false, 00:06:49.526 "supported_io_types": { 00:06:49.526 "read": true, 00:06:49.526 "write": true, 00:06:49.526 "unmap": true, 00:06:49.526 "flush": true, 00:06:49.526 "reset": true, 00:06:49.526 "nvme_admin": false, 00:06:49.526 "nvme_io": false, 00:06:49.526 "nvme_io_md": false, 00:06:49.526 "write_zeroes": true, 00:06:49.526 "zcopy": true, 00:06:49.526 "get_zone_info": false, 00:06:49.526 "zone_management": false, 00:06:49.526 "zone_append": false, 00:06:49.563 "compare": false, 00:06:49.563 "compare_and_write": false, 00:06:49.563 "abort": true, 00:06:49.563 "seek_hole": false, 00:06:49.563 "seek_data": false, 00:06:49.563 "copy": true, 00:06:49.563 "nvme_iov_md": false 00:06:49.563 }, 00:06:49.563 "memory_domains": [ 00:06:49.563 { 00:06:49.563 "dma_device_id": "system", 00:06:49.563 "dma_device_type": 1 00:06:49.563 }, 00:06:49.563 { 00:06:49.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.563 "dma_device_type": 2 00:06:49.563 } 00:06:49.563 ], 00:06:49.563 "driver_specific": {} 00:06:49.563 } 00:06:49.563 ] 00:06:49.563 06:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:06:49.563 06:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:49.563 06:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:49.563 06:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:49.563 06:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:49.563 06:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:49.563 06:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:49.563 06:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:49.563 06:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:49.563 06:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:49.563 06:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:49.563 06:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:49.563 06:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:49.823 06:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:49.823 "name": "Existed_Raid", 00:06:49.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:49.823 "strip_size_kb": 64, 00:06:49.823 "state": "configuring", 00:06:49.823 "raid_level": "raid0", 00:06:49.823 "superblock": false, 00:06:49.823 "num_base_bdevs": 2, 00:06:49.823 "num_base_bdevs_discovered": 
1, 00:06:49.823 "num_base_bdevs_operational": 2, 00:06:49.823 "base_bdevs_list": [ 00:06:49.824 { 00:06:49.824 "name": "BaseBdev1", 00:06:49.824 "uuid": "14296794-ffed-4b3f-989b-571f7f258581", 00:06:49.824 "is_configured": true, 00:06:49.824 "data_offset": 0, 00:06:49.824 "data_size": 65536 00:06:49.824 }, 00:06:49.824 { 00:06:49.824 "name": "BaseBdev2", 00:06:49.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:49.824 "is_configured": false, 00:06:49.824 "data_offset": 0, 00:06:49.824 "data_size": 0 00:06:49.824 } 00:06:49.824 ] 00:06:49.824 }' 00:06:49.824 06:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:49.824 06:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.392 06:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:50.652 [2024-08-14 06:38:17.652278] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:50.652 [2024-08-14 06:38:17.652455] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:50.652 06:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:50.652 [2024-08-14 06:38:17.880115] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:50.652 [2024-08-14 06:38:17.882388] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:50.652 [2024-08-14 06:38:17.882506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:50.652 06:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:06:50.652 06:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:06:50.652 06:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:50.652 06:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:50.652 06:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:50.652 06:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:50.652 06:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:50.652 06:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:50.652 06:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:50.652 06:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:50.652 06:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:50.652 06:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:50.652 06:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:50.652 06:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:50.911 06:38:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:50.911 "name": "Existed_Raid", 00:06:50.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.911 "strip_size_kb": 64, 00:06:50.911 "state": "configuring", 00:06:50.911 "raid_level": "raid0", 00:06:50.911 "superblock": false, 00:06:50.911 "num_base_bdevs": 2, 00:06:50.911 "num_base_bdevs_discovered": 1, 00:06:50.911 "num_base_bdevs_operational": 2, 00:06:50.911 "base_bdevs_list": [ 00:06:50.911 { 00:06:50.911 "name": "BaseBdev1", 00:06:50.911 "uuid": "14296794-ffed-4b3f-989b-571f7f258581", 00:06:50.911 "is_configured": true, 00:06:50.911 "data_offset": 0, 00:06:50.911 "data_size": 65536 00:06:50.911 }, 00:06:50.911 { 00:06:50.911 "name": "BaseBdev2", 00:06:50.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.911 "is_configured": false, 00:06:50.911 "data_offset": 0, 00:06:50.911 "data_size": 0 00:06:50.911 } 00:06:50.911 ] 00:06:50.911 }' 00:06:50.911 06:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:50.911 06:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.478 06:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:51.737 [2024-08-14 06:38:18.910115] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:51.737 [2024-08-14 06:38:18.910311] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:51.737 [2024-08-14 06:38:18.910389] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:51.737 [2024-08-14 06:38:18.910827] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:51.737 [2024-08-14 06:38:18.911075] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:51.737 [2024-08-14 06:38:18.911136] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:51.737 [2024-08-14 06:38:18.911524] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.737 BaseBdev2 00:06:51.737 06:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:06:51.737 06:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:06:51.737 06:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:06:51.737 06:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:06:51.737 06:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:06:51.737 06:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:06:51.737 06:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:51.996 06:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:52.256 [ 00:06:52.256 { 00:06:52.256 "name": "BaseBdev2", 00:06:52.256 "aliases": [ 00:06:52.256 "5efbd3ef-8cdb-4b9f-9bb5-b7cdfc781e00" 00:06:52.256 ], 00:06:52.256 "product_name": "Malloc disk", 
00:06:52.256 "block_size": 512, 00:06:52.256 "num_blocks": 65536, 00:06:52.256 "uuid": "5efbd3ef-8cdb-4b9f-9bb5-b7cdfc781e00", 00:06:52.256 "assigned_rate_limits": { 00:06:52.256 "rw_ios_per_sec": 0, 00:06:52.256 "rw_mbytes_per_sec": 0, 00:06:52.256 "r_mbytes_per_sec": 0, 00:06:52.256 "w_mbytes_per_sec": 0 00:06:52.256 }, 00:06:52.256 "claimed": true, 00:06:52.256 "claim_type": "exclusive_write", 00:06:52.256 "zoned": false, 00:06:52.256 "supported_io_types": { 00:06:52.256 "read": true, 00:06:52.256 "write": true, 00:06:52.256 "unmap": true, 00:06:52.256 "flush": true, 00:06:52.256 "reset": true, 00:06:52.256 "nvme_admin": false, 00:06:52.256 "nvme_io": false, 00:06:52.256 "nvme_io_md": false, 00:06:52.256 "write_zeroes": true, 00:06:52.256 "zcopy": true, 00:06:52.256 "get_zone_info": false, 00:06:52.256 "zone_management": false, 00:06:52.256 "zone_append": false, 00:06:52.256 "compare": false, 00:06:52.256 "compare_and_write": false, 00:06:52.256 "abort": true, 00:06:52.256 "seek_hole": false, 00:06:52.256 "seek_data": false, 00:06:52.256 "copy": true, 00:06:52.256 "nvme_iov_md": false 00:06:52.256 }, 00:06:52.256 "memory_domains": [ 00:06:52.256 { 00:06:52.256 "dma_device_id": "system", 00:06:52.256 "dma_device_type": 1 00:06:52.256 }, 00:06:52.256 { 00:06:52.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.256 "dma_device_type": 2 00:06:52.256 } 00:06:52.256 ], 00:06:52.256 "driver_specific": {} 00:06:52.256 } 00:06:52.256 ] 00:06:52.256 06:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:06:52.256 06:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:06:52.256 06:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:06:52.256 06:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:52.256 06:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:52.256 06:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:06:52.256 06:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:52.256 06:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:52.256 06:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:52.256 06:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:52.256 06:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:52.256 06:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:52.256 06:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:52.256 06:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:52.256 06:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.515 06:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:52.515 "name": "Existed_Raid", 00:06:52.515 "uuid": "5b525584-65af-40fc-b7b8-3a840c6d7e26", 00:06:52.515 "strip_size_kb": 64, 00:06:52.515 "state": "online", 00:06:52.515 "raid_level": "raid0", 00:06:52.515 
"superblock": false, 00:06:52.515 "num_base_bdevs": 2, 00:06:52.515 "num_base_bdevs_discovered": 2, 00:06:52.515 "num_base_bdevs_operational": 2, 00:06:52.515 "base_bdevs_list": [ 00:06:52.515 { 00:06:52.515 "name": "BaseBdev1", 00:06:52.515 "uuid": "14296794-ffed-4b3f-989b-571f7f258581", 00:06:52.515 "is_configured": true, 00:06:52.515 "data_offset": 0, 00:06:52.515 "data_size": 65536 00:06:52.515 }, 00:06:52.515 { 00:06:52.515 "name": "BaseBdev2", 00:06:52.515 "uuid": "5efbd3ef-8cdb-4b9f-9bb5-b7cdfc781e00", 00:06:52.515 "is_configured": true, 00:06:52.515 "data_offset": 0, 00:06:52.515 "data_size": 65536 00:06:52.515 } 00:06:52.515 ] 00:06:52.515 }' 00:06:52.515 06:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:52.515 06:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.084 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:06:53.084 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:06:53.084 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:06:53.084 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:06:53.084 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:06:53.084 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:06:53.084 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:06:53.084 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:06:53.344 [2024-08-14 06:38:20.424328] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:53.344 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:06:53.344 "name": "Existed_Raid", 00:06:53.344 "aliases": [ 00:06:53.344 "5b525584-65af-40fc-b7b8-3a840c6d7e26" 00:06:53.344 ], 00:06:53.344 "product_name": "Raid Volume", 00:06:53.344 "block_size": 512, 00:06:53.344 "num_blocks": 131072, 00:06:53.344 "uuid": "5b525584-65af-40fc-b7b8-3a840c6d7e26", 00:06:53.344 "assigned_rate_limits": { 00:06:53.344 "rw_ios_per_sec": 0, 00:06:53.344 "rw_mbytes_per_sec": 0, 00:06:53.344 "r_mbytes_per_sec": 0, 00:06:53.344 "w_mbytes_per_sec": 0 00:06:53.344 }, 00:06:53.344 "claimed": false, 00:06:53.344 "zoned": false, 00:06:53.344 "supported_io_types": { 00:06:53.344 "read": true, 00:06:53.344 "write": true, 00:06:53.344 "unmap": true, 00:06:53.344 "flush": true, 00:06:53.344 "reset": true, 00:06:53.344 "nvme_admin": false, 00:06:53.344 "nvme_io": false, 00:06:53.344 "nvme_io_md": false, 00:06:53.345 "write_zeroes": true, 00:06:53.345 "zcopy": false, 00:06:53.345 "get_zone_info": false, 00:06:53.345 "zone_management": false, 00:06:53.345 "zone_append": false, 00:06:53.345 "compare": false, 00:06:53.345 "compare_and_write": false, 00:06:53.345 "abort": false, 00:06:53.345 "seek_hole": false, 00:06:53.345 "seek_data": false, 00:06:53.345 "copy": false, 00:06:53.345 "nvme_iov_md": false 00:06:53.345 }, 00:06:53.345 "memory_domains": [ 00:06:53.345 { 00:06:53.345 "dma_device_id": "system", 00:06:53.345 "dma_device_type": 1 00:06:53.345 }, 00:06:53.345 { 00:06:53.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.345 "dma_device_type": 2 
00:06:53.345 }, 00:06:53.345 { 00:06:53.345 "dma_device_id": "system", 00:06:53.345 "dma_device_type": 1 00:06:53.345 }, 00:06:53.345 { 00:06:53.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.345 "dma_device_type": 2 00:06:53.345 } 00:06:53.345 ], 00:06:53.345 "driver_specific": { 00:06:53.345 "raid": { 00:06:53.345 "uuid": "5b525584-65af-40fc-b7b8-3a840c6d7e26", 00:06:53.345 "strip_size_kb": 64, 00:06:53.345 "state": "online", 00:06:53.345 "raid_level": "raid0", 00:06:53.345 "superblock": false, 00:06:53.345 "num_base_bdevs": 2, 00:06:53.345 "num_base_bdevs_discovered": 2, 00:06:53.345 "num_base_bdevs_operational": 2, 00:06:53.345 "base_bdevs_list": [ 00:06:53.345 { 00:06:53.345 "name": "BaseBdev1", 00:06:53.345 "uuid": "14296794-ffed-4b3f-989b-571f7f258581", 00:06:53.345 "is_configured": true, 00:06:53.345 "data_offset": 0, 00:06:53.345 "data_size": 65536 00:06:53.345 }, 00:06:53.345 { 00:06:53.345 "name": "BaseBdev2", 00:06:53.345 "uuid": "5efbd3ef-8cdb-4b9f-9bb5-b7cdfc781e00", 00:06:53.345 "is_configured": true, 00:06:53.345 "data_offset": 0, 00:06:53.345 "data_size": 65536 00:06:53.345 } 00:06:53.345 ] 00:06:53.345 } 00:06:53.345 } 00:06:53.345 }' 00:06:53.345 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:53.345 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:06:53.345 BaseBdev2' 00:06:53.345 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:53.345 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:53.345 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:06:53.605 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:53.605 "name": "BaseBdev1", 00:06:53.605 "aliases": [ 00:06:53.605 "14296794-ffed-4b3f-989b-571f7f258581" 00:06:53.605 ], 00:06:53.605 "product_name": "Malloc disk", 00:06:53.605 "block_size": 512, 00:06:53.605 "num_blocks": 65536, 00:06:53.605 "uuid": "14296794-ffed-4b3f-989b-571f7f258581", 00:06:53.605 "assigned_rate_limits": { 00:06:53.605 "rw_ios_per_sec": 0, 00:06:53.605 "rw_mbytes_per_sec": 0, 00:06:53.605 "r_mbytes_per_sec": 0, 00:06:53.605 "w_mbytes_per_sec": 0 00:06:53.605 }, 00:06:53.605 "claimed": true, 00:06:53.605 "claim_type": "exclusive_write", 00:06:53.605 "zoned": false, 00:06:53.605 "supported_io_types": { 00:06:53.605 "read": true, 00:06:53.605 "write": true, 00:06:53.605 "unmap": true, 00:06:53.605 "flush": true, 00:06:53.605 "reset": true, 00:06:53.605 "nvme_admin": false, 00:06:53.605 "nvme_io": false, 00:06:53.605 "nvme_io_md": false, 00:06:53.605 "write_zeroes": true, 00:06:53.605 "zcopy": true, 00:06:53.605 "get_zone_info": false, 00:06:53.605 "zone_management": false, 00:06:53.605 "zone_append": false, 00:06:53.605 "compare": false, 00:06:53.605 "compare_and_write": false, 00:06:53.605 "abort": true, 00:06:53.605 "seek_hole": false, 00:06:53.605 "seek_data": false, 00:06:53.605 "copy": true, 00:06:53.605 "nvme_iov_md": false 00:06:53.605 }, 00:06:53.605 "memory_domains": [ 00:06:53.605 { 00:06:53.605 "dma_device_id": "system", 00:06:53.605 "dma_device_type": 1 00:06:53.605 }, 00:06:53.605 { 00:06:53.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.605 "dma_device_type": 2 00:06:53.605 } 
00:06:53.605 ], 00:06:53.605 "driver_specific": {} 00:06:53.605 }' 00:06:53.605 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:53.605 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:53.605 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:53.605 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:53.605 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:53.865 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:53.865 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:53.865 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:53.865 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:53.865 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:53.865 06:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:53.865 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:53.865 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:53.865 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:06:53.865 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:54.125 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:54.125 "name": "BaseBdev2", 00:06:54.125 "aliases": [ 00:06:54.125 "5efbd3ef-8cdb-4b9f-9bb5-b7cdfc781e00" 00:06:54.125 ], 00:06:54.125 "product_name": "Malloc disk", 00:06:54.125 "block_size": 512, 00:06:54.125 "num_blocks": 65536, 00:06:54.125 "uuid": "5efbd3ef-8cdb-4b9f-9bb5-b7cdfc781e00", 00:06:54.125 "assigned_rate_limits": { 00:06:54.125 "rw_ios_per_sec": 0, 00:06:54.125 "rw_mbytes_per_sec": 0, 00:06:54.125 "r_mbytes_per_sec": 0, 00:06:54.125 "w_mbytes_per_sec": 0 00:06:54.125 }, 00:06:54.125 "claimed": true, 00:06:54.125 "claim_type": "exclusive_write", 00:06:54.125 "zoned": false, 00:06:54.125 "supported_io_types": { 00:06:54.125 "read": true, 00:06:54.125 "write": true, 00:06:54.125 "unmap": true, 00:06:54.125 "flush": true, 00:06:54.125 "reset": true, 00:06:54.125 "nvme_admin": false, 00:06:54.125 "nvme_io": false, 00:06:54.125 "nvme_io_md": false, 00:06:54.125 "write_zeroes": true, 00:06:54.125 "zcopy": true, 00:06:54.125 "get_zone_info": false, 00:06:54.125 "zone_management": false, 00:06:54.125 "zone_append": false, 00:06:54.125 "compare": false, 00:06:54.125 "compare_and_write": false, 00:06:54.125 "abort": true, 00:06:54.125 "seek_hole": false, 00:06:54.125 "seek_data": false, 00:06:54.125 "copy": true, 00:06:54.125 "nvme_iov_md": false 00:06:54.125 }, 00:06:54.125 "memory_domains": [ 00:06:54.125 { 00:06:54.125 "dma_device_id": "system", 00:06:54.125 "dma_device_type": 1 00:06:54.125 }, 00:06:54.125 { 00:06:54.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.125 "dma_device_type": 2 00:06:54.125 } 00:06:54.125 ], 00:06:54.125 "driver_specific": {} 00:06:54.125 }' 00:06:54.125 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:54.125 06:38:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:54.125 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:54.125 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:54.125 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:54.384 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:54.384 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:54.384 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:54.384 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:54.384 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:54.384 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:54.385 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:54.385 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:54.644 [2024-08-14 06:38:21.778309] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:54.644 [2024-08-14 06:38:21.778351] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:54.644 [2024-08-14 06:38:21.778409] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.644 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:06:54.644 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:06:54.644 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:06:54.644 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:06:54.644 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:06:54.644 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:54.644 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:54.644 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:06:54.644 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:54.644 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:54.644 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:06:54.644 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:54.644 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:54.644 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:54.644 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:54.644 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:54.644 06:38:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:54.904 06:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:54.904 "name": "Existed_Raid", 00:06:54.904 "uuid": "5b525584-65af-40fc-b7b8-3a840c6d7e26", 00:06:54.904 "strip_size_kb": 64, 00:06:54.904 "state": "offline", 00:06:54.904 "raid_level": "raid0", 00:06:54.904 "superblock": false, 00:06:54.904 "num_base_bdevs": 2, 00:06:54.904 "num_base_bdevs_discovered": 1, 00:06:54.904 "num_base_bdevs_operational": 1, 00:06:54.904 "base_bdevs_list": [ 00:06:54.904 { 00:06:54.904 "name": null, 00:06:54.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.904 "is_configured": false, 00:06:54.904 "data_offset": 0, 00:06:54.904 "data_size": 65536 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "name": "BaseBdev2", 00:06:54.904 "uuid": "5efbd3ef-8cdb-4b9f-9bb5-b7cdfc781e00", 00:06:54.904 "is_configured": true, 00:06:54.904 "data_offset": 0, 00:06:54.904 "data_size": 65536 00:06:54.904 } 00:06:54.904 ] 00:06:54.904 }' 00:06:54.904 06:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:54.904 06:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.474 06:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:06:55.474 06:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:06:55.474 06:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:55.474 06:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:06:55.733 06:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:06:55.733 06:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:55.733 06:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:55.733 [2024-08-14 06:38:22.908052] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:55.733 [2024-08-14 06:38:22.908254] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:55.733 06:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:06:55.733 06:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:06:55.733 06:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:55.733 06:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:06:55.993 06:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:06:55.993 06:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:06:55.993 06:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:06:55.993 06:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 71068 00:06:55.993 06:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 71068 ']' 00:06:55.993 06:38:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 71068 00:06:55.993 06:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:06:55.993 06:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:55.993 06:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71068 00:06:55.993 killing process with pid 71068 00:06:55.993 06:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:55.993 06:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:55.993 06:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71068' 00:06:55.993 06:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 71068 00:06:55.993 [2024-08-14 06:38:23.166356] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:55.993 06:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 71068 00:06:55.993 [2024-08-14 06:38:23.167442] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:06:56.253 00:06:56.253 real 0m9.657s 00:06:56.253 user 0m17.296s 00:06:56.253 sys 0m1.474s 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.253 ************************************ 00:06:56.253 END TEST raid_state_function_test 00:06:56.253 ************************************ 00:06:56.253 06:38:23 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:56.253 06:38:23 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:06:56.253 06:38:23 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.253 06:38:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:56.253 ************************************ 00:06:56.253 START TEST raid_state_function_test_sb 00:06:56.253 ************************************ 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 true 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=71414 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 71414' 00:06:56.253 Process raid pid: 71414 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 71414 /var/tmp/spdk-raid.sock 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 71414 ']' 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:56.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:56.253 06:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.512 [2024-08-14 06:38:23.555733] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
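For context, the raid_state_function_test_sb trace that follows repeats the previous state-function flow with an on-disk superblock enabled via -s. Only as a rough, hand-runnable sketch, assuming a bdev_svc target is already listening on /var/tmp/spdk-raid.sock and reusing the same repo path and bdev names that appear in the trace, the RPC sequence it exercises is approximately:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # base bdevs: 32 MiB malloc disks with 512-byte blocks, as in the trace
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev2
  # raid0 with a 64 KiB strip and a superblock (-s)
  $RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # the state should report "online" with both base bdevs discovered
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
  # raid0 has no redundancy, so removing a base bdev drives the array offline
  $RPC bdev_malloc_delete BaseBdev1
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'

The test itself additionally creates Existed_Raid before either base bdev exists, which is what produces the "configuring" state seen in the JSON dumps below.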
00:06:56.512 [2024-08-14 06:38:23.555921] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.512 [2024-08-14 06:38:23.683299] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.512 [2024-08-14 06:38:23.733887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.771 [2024-08-14 06:38:23.778277] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.771 [2024-08-14 06:38:23.778403] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.338 06:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:57.338 06:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:06:57.338 06:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:57.596 [2024-08-14 06:38:24.594700] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:57.596 [2024-08-14 06:38:24.594830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:57.596 [2024-08-14 06:38:24.594851] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:57.596 [2024-08-14 06:38:24.594862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:57.596 06:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:57.596 06:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:57.596 06:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:57.596 06:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:57.596 06:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:57.596 06:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:57.596 06:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:57.596 06:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:57.596 06:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:57.596 06:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:57.596 06:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:57.596 06:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.596 06:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:57.596 "name": "Existed_Raid", 00:06:57.596 "uuid": "776d14a3-6f09-4b8b-b2f8-6b860a211108", 00:06:57.596 "strip_size_kb": 64, 00:06:57.596 "state": "configuring", 00:06:57.596 "raid_level": "raid0", 00:06:57.596 
"superblock": true, 00:06:57.596 "num_base_bdevs": 2, 00:06:57.596 "num_base_bdevs_discovered": 0, 00:06:57.596 "num_base_bdevs_operational": 2, 00:06:57.596 "base_bdevs_list": [ 00:06:57.596 { 00:06:57.596 "name": "BaseBdev1", 00:06:57.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.596 "is_configured": false, 00:06:57.596 "data_offset": 0, 00:06:57.596 "data_size": 0 00:06:57.596 }, 00:06:57.596 { 00:06:57.596 "name": "BaseBdev2", 00:06:57.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.596 "is_configured": false, 00:06:57.596 "data_offset": 0, 00:06:57.596 "data_size": 0 00:06:57.596 } 00:06:57.596 ] 00:06:57.596 }' 00:06:57.596 06:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:57.596 06:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.164 06:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:58.423 [2024-08-14 06:38:25.528986] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:58.423 [2024-08-14 06:38:25.529116] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:58.423 06:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:58.682 [2024-08-14 06:38:25.732683] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:58.682 [2024-08-14 06:38:25.732828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:58.682 [2024-08-14 06:38:25.732886] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:58.682 [2024-08-14 06:38:25.732914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:58.682 06:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:58.941 [2024-08-14 06:38:25.949642] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:58.941 BaseBdev1 00:06:58.941 06:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:06:58.941 06:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:06:58.941 06:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:06:58.942 06:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:06:58.942 06:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:06:58.942 06:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:06:58.942 06:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:58.942 06:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:59.201 [ 00:06:59.201 { 
00:06:59.201 "name": "BaseBdev1", 00:06:59.201 "aliases": [ 00:06:59.201 "d58b649e-a674-4cbd-9f76-eb7ba30121e8" 00:06:59.201 ], 00:06:59.201 "product_name": "Malloc disk", 00:06:59.201 "block_size": 512, 00:06:59.201 "num_blocks": 65536, 00:06:59.201 "uuid": "d58b649e-a674-4cbd-9f76-eb7ba30121e8", 00:06:59.201 "assigned_rate_limits": { 00:06:59.201 "rw_ios_per_sec": 0, 00:06:59.201 "rw_mbytes_per_sec": 0, 00:06:59.201 "r_mbytes_per_sec": 0, 00:06:59.201 "w_mbytes_per_sec": 0 00:06:59.201 }, 00:06:59.201 "claimed": true, 00:06:59.201 "claim_type": "exclusive_write", 00:06:59.201 "zoned": false, 00:06:59.201 "supported_io_types": { 00:06:59.201 "read": true, 00:06:59.201 "write": true, 00:06:59.201 "unmap": true, 00:06:59.201 "flush": true, 00:06:59.201 "reset": true, 00:06:59.201 "nvme_admin": false, 00:06:59.201 "nvme_io": false, 00:06:59.201 "nvme_io_md": false, 00:06:59.201 "write_zeroes": true, 00:06:59.201 "zcopy": true, 00:06:59.201 "get_zone_info": false, 00:06:59.201 "zone_management": false, 00:06:59.201 "zone_append": false, 00:06:59.201 "compare": false, 00:06:59.201 "compare_and_write": false, 00:06:59.201 "abort": true, 00:06:59.201 "seek_hole": false, 00:06:59.201 "seek_data": false, 00:06:59.201 "copy": true, 00:06:59.201 "nvme_iov_md": false 00:06:59.201 }, 00:06:59.201 "memory_domains": [ 00:06:59.201 { 00:06:59.201 "dma_device_id": "system", 00:06:59.201 "dma_device_type": 1 00:06:59.201 }, 00:06:59.201 { 00:06:59.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.201 "dma_device_type": 2 00:06:59.201 } 00:06:59.201 ], 00:06:59.201 "driver_specific": {} 00:06:59.201 } 00:06:59.201 ] 00:06:59.201 06:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:06:59.201 06:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:59.201 06:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:59.201 06:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:59.201 06:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:59.201 06:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:59.201 06:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:59.201 06:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:59.201 06:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:59.201 06:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:59.201 06:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:59.201 06:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.201 06:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:59.460 06:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:59.460 "name": "Existed_Raid", 00:06:59.460 "uuid": "01784e96-9d45-48e3-a4f7-1f0354452cdc", 00:06:59.460 "strip_size_kb": 64, 00:06:59.460 "state": "configuring", 00:06:59.460 "raid_level": 
"raid0", 00:06:59.460 "superblock": true, 00:06:59.460 "num_base_bdevs": 2, 00:06:59.460 "num_base_bdevs_discovered": 1, 00:06:59.460 "num_base_bdevs_operational": 2, 00:06:59.460 "base_bdevs_list": [ 00:06:59.460 { 00:06:59.460 "name": "BaseBdev1", 00:06:59.460 "uuid": "d58b649e-a674-4cbd-9f76-eb7ba30121e8", 00:06:59.460 "is_configured": true, 00:06:59.460 "data_offset": 2048, 00:06:59.460 "data_size": 63488 00:06:59.460 }, 00:06:59.460 { 00:06:59.460 "name": "BaseBdev2", 00:06:59.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.460 "is_configured": false, 00:06:59.460 "data_offset": 0, 00:06:59.460 "data_size": 0 00:06:59.460 } 00:06:59.460 ] 00:06:59.460 }' 00:06:59.460 06:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:59.460 06:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.028 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:00.028 [2024-08-14 06:38:27.259594] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:00.028 [2024-08-14 06:38:27.259756] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:00.028 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:00.287 [2024-08-14 06:38:27.463308] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:00.287 [2024-08-14 06:38:27.465453] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:00.287 [2024-08-14 06:38:27.465552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:00.287 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:00.287 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:00.287 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:00.287 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:00.287 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:00.287 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:00.287 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:00.287 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:00.287 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:00.287 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:00.288 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:00.288 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:00.288 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.288 06:38:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:00.547 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:00.547 "name": "Existed_Raid", 00:07:00.547 "uuid": "373842d0-4871-48bc-9995-a9f63812f954", 00:07:00.547 "strip_size_kb": 64, 00:07:00.547 "state": "configuring", 00:07:00.547 "raid_level": "raid0", 00:07:00.547 "superblock": true, 00:07:00.547 "num_base_bdevs": 2, 00:07:00.547 "num_base_bdevs_discovered": 1, 00:07:00.547 "num_base_bdevs_operational": 2, 00:07:00.547 "base_bdevs_list": [ 00:07:00.547 { 00:07:00.547 "name": "BaseBdev1", 00:07:00.547 "uuid": "d58b649e-a674-4cbd-9f76-eb7ba30121e8", 00:07:00.547 "is_configured": true, 00:07:00.547 "data_offset": 2048, 00:07:00.547 "data_size": 63488 00:07:00.547 }, 00:07:00.547 { 00:07:00.547 "name": "BaseBdev2", 00:07:00.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.547 "is_configured": false, 00:07:00.547 "data_offset": 0, 00:07:00.547 "data_size": 0 00:07:00.547 } 00:07:00.547 ] 00:07:00.547 }' 00:07:00.547 06:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:00.547 06:38:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.114 06:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:01.373 [2024-08-14 06:38:28.419812] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:01.373 [2024-08-14 06:38:28.420051] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:01.373 [2024-08-14 06:38:28.420071] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:01.373 [2024-08-14 06:38:28.420412] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:01.373 [2024-08-14 06:38:28.420597] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:01.373 [2024-08-14 06:38:28.420617] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:01.373 BaseBdev2 00:07:01.373 [2024-08-14 06:38:28.420770] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.373 06:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:01.373 06:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:07:01.373 06:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:07:01.373 06:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:07:01.373 06:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:07:01.373 06:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:07:01.373 06:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:01.651 06:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev2 -t 2000 00:07:01.651 [ 00:07:01.651 { 00:07:01.651 "name": "BaseBdev2", 00:07:01.651 "aliases": [ 00:07:01.651 "d6e4530e-0112-4117-a8e9-068980485b7a" 00:07:01.651 ], 00:07:01.651 "product_name": "Malloc disk", 00:07:01.651 "block_size": 512, 00:07:01.651 "num_blocks": 65536, 00:07:01.651 "uuid": "d6e4530e-0112-4117-a8e9-068980485b7a", 00:07:01.651 "assigned_rate_limits": { 00:07:01.651 "rw_ios_per_sec": 0, 00:07:01.651 "rw_mbytes_per_sec": 0, 00:07:01.651 "r_mbytes_per_sec": 0, 00:07:01.651 "w_mbytes_per_sec": 0 00:07:01.651 }, 00:07:01.651 "claimed": true, 00:07:01.651 "claim_type": "exclusive_write", 00:07:01.651 "zoned": false, 00:07:01.651 "supported_io_types": { 00:07:01.651 "read": true, 00:07:01.651 "write": true, 00:07:01.651 "unmap": true, 00:07:01.651 "flush": true, 00:07:01.651 "reset": true, 00:07:01.651 "nvme_admin": false, 00:07:01.651 "nvme_io": false, 00:07:01.651 "nvme_io_md": false, 00:07:01.651 "write_zeroes": true, 00:07:01.651 "zcopy": true, 00:07:01.651 "get_zone_info": false, 00:07:01.651 "zone_management": false, 00:07:01.651 "zone_append": false, 00:07:01.651 "compare": false, 00:07:01.651 "compare_and_write": false, 00:07:01.651 "abort": true, 00:07:01.651 "seek_hole": false, 00:07:01.651 "seek_data": false, 00:07:01.651 "copy": true, 00:07:01.651 "nvme_iov_md": false 00:07:01.651 }, 00:07:01.651 "memory_domains": [ 00:07:01.651 { 00:07:01.651 "dma_device_id": "system", 00:07:01.651 "dma_device_type": 1 00:07:01.651 }, 00:07:01.651 { 00:07:01.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.651 "dma_device_type": 2 00:07:01.651 } 00:07:01.651 ], 00:07:01.651 "driver_specific": {} 00:07:01.651 } 00:07:01.651 ] 00:07:01.651 06:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:07:01.651 06:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:01.651 06:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:01.651 06:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:01.651 06:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:01.651 06:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:01.651 06:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:01.652 06:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:01.652 06:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:01.652 06:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:01.652 06:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:01.652 06:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:01.652 06:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:01.652 06:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.652 06:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:01.910 06:38:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:01.910 "name": "Existed_Raid", 00:07:01.910 "uuid": "373842d0-4871-48bc-9995-a9f63812f954", 00:07:01.910 "strip_size_kb": 64, 00:07:01.910 "state": "online", 00:07:01.911 "raid_level": "raid0", 00:07:01.911 "superblock": true, 00:07:01.911 "num_base_bdevs": 2, 00:07:01.911 "num_base_bdevs_discovered": 2, 00:07:01.911 "num_base_bdevs_operational": 2, 00:07:01.911 "base_bdevs_list": [ 00:07:01.911 { 00:07:01.911 "name": "BaseBdev1", 00:07:01.911 "uuid": "d58b649e-a674-4cbd-9f76-eb7ba30121e8", 00:07:01.911 "is_configured": true, 00:07:01.911 "data_offset": 2048, 00:07:01.911 "data_size": 63488 00:07:01.911 }, 00:07:01.911 { 00:07:01.911 "name": "BaseBdev2", 00:07:01.911 "uuid": "d6e4530e-0112-4117-a8e9-068980485b7a", 00:07:01.911 "is_configured": true, 00:07:01.911 "data_offset": 2048, 00:07:01.911 "data_size": 63488 00:07:01.911 } 00:07:01.911 ] 00:07:01.911 }' 00:07:01.911 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:01.911 06:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.479 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:02.479 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:02.479 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:02.479 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:02.479 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:02.479 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:07:02.479 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:02.479 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:02.738 [2024-08-14 06:38:29.773896] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.738 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:02.738 "name": "Existed_Raid", 00:07:02.738 "aliases": [ 00:07:02.738 "373842d0-4871-48bc-9995-a9f63812f954" 00:07:02.738 ], 00:07:02.738 "product_name": "Raid Volume", 00:07:02.738 "block_size": 512, 00:07:02.738 "num_blocks": 126976, 00:07:02.738 "uuid": "373842d0-4871-48bc-9995-a9f63812f954", 00:07:02.738 "assigned_rate_limits": { 00:07:02.738 "rw_ios_per_sec": 0, 00:07:02.738 "rw_mbytes_per_sec": 0, 00:07:02.738 "r_mbytes_per_sec": 0, 00:07:02.738 "w_mbytes_per_sec": 0 00:07:02.738 }, 00:07:02.738 "claimed": false, 00:07:02.738 "zoned": false, 00:07:02.738 "supported_io_types": { 00:07:02.738 "read": true, 00:07:02.738 "write": true, 00:07:02.738 "unmap": true, 00:07:02.738 "flush": true, 00:07:02.738 "reset": true, 00:07:02.738 "nvme_admin": false, 00:07:02.738 "nvme_io": false, 00:07:02.738 "nvme_io_md": false, 00:07:02.738 "write_zeroes": true, 00:07:02.738 "zcopy": false, 00:07:02.738 "get_zone_info": false, 00:07:02.738 "zone_management": false, 00:07:02.738 "zone_append": false, 00:07:02.738 "compare": false, 00:07:02.738 "compare_and_write": false, 00:07:02.738 "abort": false, 00:07:02.738 "seek_hole": false, 00:07:02.738 "seek_data": 
false, 00:07:02.738 "copy": false, 00:07:02.738 "nvme_iov_md": false 00:07:02.738 }, 00:07:02.738 "memory_domains": [ 00:07:02.738 { 00:07:02.738 "dma_device_id": "system", 00:07:02.738 "dma_device_type": 1 00:07:02.738 }, 00:07:02.738 { 00:07:02.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.738 "dma_device_type": 2 00:07:02.738 }, 00:07:02.738 { 00:07:02.738 "dma_device_id": "system", 00:07:02.738 "dma_device_type": 1 00:07:02.738 }, 00:07:02.738 { 00:07:02.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.738 "dma_device_type": 2 00:07:02.738 } 00:07:02.738 ], 00:07:02.738 "driver_specific": { 00:07:02.738 "raid": { 00:07:02.738 "uuid": "373842d0-4871-48bc-9995-a9f63812f954", 00:07:02.738 "strip_size_kb": 64, 00:07:02.738 "state": "online", 00:07:02.738 "raid_level": "raid0", 00:07:02.738 "superblock": true, 00:07:02.738 "num_base_bdevs": 2, 00:07:02.738 "num_base_bdevs_discovered": 2, 00:07:02.738 "num_base_bdevs_operational": 2, 00:07:02.738 "base_bdevs_list": [ 00:07:02.738 { 00:07:02.738 "name": "BaseBdev1", 00:07:02.738 "uuid": "d58b649e-a674-4cbd-9f76-eb7ba30121e8", 00:07:02.738 "is_configured": true, 00:07:02.738 "data_offset": 2048, 00:07:02.738 "data_size": 63488 00:07:02.738 }, 00:07:02.738 { 00:07:02.738 "name": "BaseBdev2", 00:07:02.738 "uuid": "d6e4530e-0112-4117-a8e9-068980485b7a", 00:07:02.738 "is_configured": true, 00:07:02.738 "data_offset": 2048, 00:07:02.738 "data_size": 63488 00:07:02.738 } 00:07:02.738 ] 00:07:02.738 } 00:07:02.738 } 00:07:02.738 }' 00:07:02.738 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:02.738 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:02.738 BaseBdev2' 00:07:02.738 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:02.738 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:02.738 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:02.998 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:02.998 "name": "BaseBdev1", 00:07:02.998 "aliases": [ 00:07:02.998 "d58b649e-a674-4cbd-9f76-eb7ba30121e8" 00:07:02.998 ], 00:07:02.998 "product_name": "Malloc disk", 00:07:02.998 "block_size": 512, 00:07:02.998 "num_blocks": 65536, 00:07:02.998 "uuid": "d58b649e-a674-4cbd-9f76-eb7ba30121e8", 00:07:02.998 "assigned_rate_limits": { 00:07:02.998 "rw_ios_per_sec": 0, 00:07:02.998 "rw_mbytes_per_sec": 0, 00:07:02.998 "r_mbytes_per_sec": 0, 00:07:02.998 "w_mbytes_per_sec": 0 00:07:02.998 }, 00:07:02.998 "claimed": true, 00:07:02.998 "claim_type": "exclusive_write", 00:07:02.998 "zoned": false, 00:07:02.998 "supported_io_types": { 00:07:02.998 "read": true, 00:07:02.998 "write": true, 00:07:02.998 "unmap": true, 00:07:02.998 "flush": true, 00:07:02.998 "reset": true, 00:07:02.998 "nvme_admin": false, 00:07:02.998 "nvme_io": false, 00:07:02.998 "nvme_io_md": false, 00:07:02.998 "write_zeroes": true, 00:07:02.998 "zcopy": true, 00:07:02.998 "get_zone_info": false, 00:07:02.998 "zone_management": false, 00:07:02.998 "zone_append": false, 00:07:02.998 "compare": false, 00:07:02.998 "compare_and_write": false, 00:07:02.998 "abort": true, 00:07:02.998 "seek_hole": false, 00:07:02.998 "seek_data": 
false, 00:07:02.998 "copy": true, 00:07:02.998 "nvme_iov_md": false 00:07:02.998 }, 00:07:02.998 "memory_domains": [ 00:07:02.998 { 00:07:02.998 "dma_device_id": "system", 00:07:02.998 "dma_device_type": 1 00:07:02.998 }, 00:07:02.998 { 00:07:02.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.998 "dma_device_type": 2 00:07:02.998 } 00:07:02.998 ], 00:07:02.998 "driver_specific": {} 00:07:02.998 }' 00:07:02.998 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:02.998 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:02.998 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:02.998 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:02.998 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:02.998 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:02.998 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:03.256 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:03.256 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:03.256 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:03.256 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:03.256 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:03.256 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:03.256 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:03.256 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:03.514 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:03.514 "name": "BaseBdev2", 00:07:03.514 "aliases": [ 00:07:03.514 "d6e4530e-0112-4117-a8e9-068980485b7a" 00:07:03.514 ], 00:07:03.514 "product_name": "Malloc disk", 00:07:03.514 "block_size": 512, 00:07:03.514 "num_blocks": 65536, 00:07:03.514 "uuid": "d6e4530e-0112-4117-a8e9-068980485b7a", 00:07:03.514 "assigned_rate_limits": { 00:07:03.514 "rw_ios_per_sec": 0, 00:07:03.514 "rw_mbytes_per_sec": 0, 00:07:03.514 "r_mbytes_per_sec": 0, 00:07:03.514 "w_mbytes_per_sec": 0 00:07:03.514 }, 00:07:03.514 "claimed": true, 00:07:03.514 "claim_type": "exclusive_write", 00:07:03.514 "zoned": false, 00:07:03.514 "supported_io_types": { 00:07:03.514 "read": true, 00:07:03.514 "write": true, 00:07:03.514 "unmap": true, 00:07:03.514 "flush": true, 00:07:03.514 "reset": true, 00:07:03.514 "nvme_admin": false, 00:07:03.514 "nvme_io": false, 00:07:03.514 "nvme_io_md": false, 00:07:03.514 "write_zeroes": true, 00:07:03.514 "zcopy": true, 00:07:03.514 "get_zone_info": false, 00:07:03.514 "zone_management": false, 00:07:03.514 "zone_append": false, 00:07:03.514 "compare": false, 00:07:03.514 "compare_and_write": false, 00:07:03.514 "abort": true, 00:07:03.514 "seek_hole": false, 00:07:03.514 "seek_data": false, 00:07:03.514 "copy": true, 00:07:03.514 "nvme_iov_md": false 00:07:03.514 }, 00:07:03.514 "memory_domains": [ 00:07:03.514 { 00:07:03.514 
"dma_device_id": "system", 00:07:03.514 "dma_device_type": 1 00:07:03.514 }, 00:07:03.514 { 00:07:03.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.514 "dma_device_type": 2 00:07:03.514 } 00:07:03.514 ], 00:07:03.514 "driver_specific": {} 00:07:03.514 }' 00:07:03.514 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:03.514 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:03.514 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:03.514 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:03.515 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:03.515 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:03.515 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:03.774 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:03.774 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:03.774 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:03.774 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:03.774 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:03.774 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:04.033 [2024-08-14 06:38:31.115625] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:04.033 [2024-08-14 06:38:31.115667] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:04.033 [2024-08-14 06:38:31.115739] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:04.033 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.291 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:04.291 "name": "Existed_Raid", 00:07:04.291 "uuid": "373842d0-4871-48bc-9995-a9f63812f954", 00:07:04.291 "strip_size_kb": 64, 00:07:04.291 "state": "offline", 00:07:04.291 "raid_level": "raid0", 00:07:04.291 "superblock": true, 00:07:04.291 "num_base_bdevs": 2, 00:07:04.291 "num_base_bdevs_discovered": 1, 00:07:04.291 "num_base_bdevs_operational": 1, 00:07:04.291 "base_bdevs_list": [ 00:07:04.291 { 00:07:04.291 "name": null, 00:07:04.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.291 "is_configured": false, 00:07:04.291 "data_offset": 2048, 00:07:04.291 "data_size": 63488 00:07:04.291 }, 00:07:04.291 { 00:07:04.291 "name": "BaseBdev2", 00:07:04.291 "uuid": "d6e4530e-0112-4117-a8e9-068980485b7a", 00:07:04.291 "is_configured": true, 00:07:04.291 "data_offset": 2048, 00:07:04.291 "data_size": 63488 00:07:04.291 } 00:07:04.291 ] 00:07:04.291 }' 00:07:04.291 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:04.291 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.860 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:04.860 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:04.860 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:04.861 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:05.120 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:05.120 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:05.120 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:05.120 [2024-08-14 06:38:32.337129] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:05.120 [2024-08-14 06:38:32.337353] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:05.120 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:05.120 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:05.120 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:05.120 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:05.379 06:38:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:05.379 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:05.379 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:05.379 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 71414 00:07:05.379 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 71414 ']' 00:07:05.379 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 71414 00:07:05.379 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:07:05.379 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:05.379 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71414 00:07:05.379 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:05.379 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:05.379 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71414' 00:07:05.379 killing process with pid 71414 00:07:05.379 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 71414 00:07:05.379 [2024-08-14 06:38:32.621383] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.379 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 71414 00:07:05.379 [2024-08-14 06:38:32.622447] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:05.641 ************************************ 00:07:05.641 END TEST raid_state_function_test_sb 00:07:05.641 ************************************ 00:07:05.641 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:07:05.641 00:07:05.641 real 0m9.385s 00:07:05.641 user 0m16.775s 00:07:05.641 sys 0m1.453s 00:07:05.641 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.641 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.901 06:38:32 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:05.901 06:38:32 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:05.901 06:38:32 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.901 06:38:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:05.901 ************************************ 00:07:05.901 START TEST raid_superblock_test 00:07:05.901 ************************************ 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 2 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:07:05.901 06:38:32 
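The raid_superblock_test trace that begins here builds its base bdevs as passthru devices with fixed UUIDs so that the superblock records stable base-bdev identities. As a minimal sketch only, assuming the same socket, repo path, and names that appear in the trace below, the setup it drives boils down to:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # malloc disks wrapped in passthru bdevs pt1/pt2 with fixed UUIDs
  $RPC bdev_malloc_create 32 512 -b malloc1
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_malloc_create 32 512 -b malloc2
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # raid0 over the passthru bdevs, 64 KiB strip, superblock enabled (-s)
  $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s
  # the volume should come up online with both base bdevs discovered
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

Note in the JSON dumps that with the superblock enabled each base bdev reports data_offset 2048 and data_size 63488 rather than 0 and 65536, the difference presumably being the region reserved for the superblock metadata at the start of each base bdev.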
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=71753 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 71753 /var/tmp/spdk-raid.sock 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 71753 ']' 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:05.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:05.901 06:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.901 [2024-08-14 06:38:33.011152] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:07:05.901 [2024-08-14 06:38:33.011321] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71753 ] 00:07:06.159 [2024-08-14 06:38:33.161750] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.159 [2024-08-14 06:38:33.208809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.159 [2024-08-14 06:38:33.253292] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.159 [2024-08-14 06:38:33.253331] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.727 06:38:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:06.727 06:38:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:07:06.727 06:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:07:06.727 06:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:07:06.727 06:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:07:06.727 06:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:07:06.727 06:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:06.727 06:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:06.727 06:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:07:06.727 06:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:06.727 06:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:06.986 malloc1 00:07:06.986 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:07.267 [2024-08-14 06:38:34.259363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:07.267 [2024-08-14 06:38:34.259567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.267 [2024-08-14 06:38:34.259639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:07.267 [2024-08-14 06:38:34.259692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.267 [2024-08-14 06:38:34.261945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.268 [2024-08-14 06:38:34.262055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:07.268 pt1 00:07:07.268 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:07:07.268 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:07:07.268 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:07:07.268 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:07:07.268 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:07.268 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:07.268 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:07:07.268 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:07.268 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:07.268 malloc2 00:07:07.528 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:07.528 [2024-08-14 06:38:34.723934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:07.528 [2024-08-14 06:38:34.724140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.528 [2024-08-14 06:38:34.724217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:07.528 [2024-08-14 06:38:34.724269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.528 [2024-08-14 06:38:34.726787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.528 [2024-08-14 06:38:34.726883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:07.528 pt2 00:07:07.528 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:07:07.528 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:07:07.528 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:07:07.794 [2024-08-14 06:38:34.943622] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:07.794 [2024-08-14 06:38:34.945567] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:07.794 [2024-08-14 06:38:34.945754] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:07.794 [2024-08-14 06:38:34.945769] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:07.794 [2024-08-14 06:38:34.946118] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:07.794 [2024-08-14 06:38:34.946271] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:07.794 [2024-08-14 06:38:34.946285] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:07.794 [2024-08-14 06:38:34.946463] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.794 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:07.794 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:07.794 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:07.794 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:07.794 06:38:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:07.794 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:07.794 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:07.794 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:07.794 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:07.794 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:07.794 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:07.794 06:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:08.066 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:08.066 "name": "raid_bdev1", 00:07:08.066 "uuid": "63e211a8-1deb-49a7-80de-fd9b3086703c", 00:07:08.066 "strip_size_kb": 64, 00:07:08.066 "state": "online", 00:07:08.066 "raid_level": "raid0", 00:07:08.066 "superblock": true, 00:07:08.066 "num_base_bdevs": 2, 00:07:08.066 "num_base_bdevs_discovered": 2, 00:07:08.066 "num_base_bdevs_operational": 2, 00:07:08.066 "base_bdevs_list": [ 00:07:08.066 { 00:07:08.066 "name": "pt1", 00:07:08.066 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:08.066 "is_configured": true, 00:07:08.066 "data_offset": 2048, 00:07:08.066 "data_size": 63488 00:07:08.066 }, 00:07:08.066 { 00:07:08.066 "name": "pt2", 00:07:08.066 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:08.066 "is_configured": true, 00:07:08.066 "data_offset": 2048, 00:07:08.066 "data_size": 63488 00:07:08.066 } 00:07:08.066 ] 00:07:08.066 }' 00:07:08.066 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:08.066 06:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.633 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:07:08.633 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:08.633 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:08.633 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:08.633 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:08.633 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:08.633 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:08.633 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:08.891 [2024-08-14 06:38:35.926235] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.891 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:08.891 "name": "raid_bdev1", 00:07:08.891 "aliases": [ 00:07:08.891 "63e211a8-1deb-49a7-80de-fd9b3086703c" 00:07:08.891 ], 00:07:08.891 "product_name": "Raid Volume", 00:07:08.891 "block_size": 512, 00:07:08.891 "num_blocks": 126976, 00:07:08.891 "uuid": "63e211a8-1deb-49a7-80de-fd9b3086703c", 00:07:08.892 "assigned_rate_limits": { 00:07:08.892 
"rw_ios_per_sec": 0, 00:07:08.892 "rw_mbytes_per_sec": 0, 00:07:08.892 "r_mbytes_per_sec": 0, 00:07:08.892 "w_mbytes_per_sec": 0 00:07:08.892 }, 00:07:08.892 "claimed": false, 00:07:08.892 "zoned": false, 00:07:08.892 "supported_io_types": { 00:07:08.892 "read": true, 00:07:08.892 "write": true, 00:07:08.892 "unmap": true, 00:07:08.892 "flush": true, 00:07:08.892 "reset": true, 00:07:08.892 "nvme_admin": false, 00:07:08.892 "nvme_io": false, 00:07:08.892 "nvme_io_md": false, 00:07:08.892 "write_zeroes": true, 00:07:08.892 "zcopy": false, 00:07:08.892 "get_zone_info": false, 00:07:08.892 "zone_management": false, 00:07:08.892 "zone_append": false, 00:07:08.892 "compare": false, 00:07:08.892 "compare_and_write": false, 00:07:08.892 "abort": false, 00:07:08.892 "seek_hole": false, 00:07:08.892 "seek_data": false, 00:07:08.892 "copy": false, 00:07:08.892 "nvme_iov_md": false 00:07:08.892 }, 00:07:08.892 "memory_domains": [ 00:07:08.892 { 00:07:08.892 "dma_device_id": "system", 00:07:08.892 "dma_device_type": 1 00:07:08.892 }, 00:07:08.892 { 00:07:08.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.892 "dma_device_type": 2 00:07:08.892 }, 00:07:08.892 { 00:07:08.892 "dma_device_id": "system", 00:07:08.892 "dma_device_type": 1 00:07:08.892 }, 00:07:08.892 { 00:07:08.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.892 "dma_device_type": 2 00:07:08.892 } 00:07:08.892 ], 00:07:08.892 "driver_specific": { 00:07:08.892 "raid": { 00:07:08.892 "uuid": "63e211a8-1deb-49a7-80de-fd9b3086703c", 00:07:08.892 "strip_size_kb": 64, 00:07:08.892 "state": "online", 00:07:08.892 "raid_level": "raid0", 00:07:08.892 "superblock": true, 00:07:08.892 "num_base_bdevs": 2, 00:07:08.892 "num_base_bdevs_discovered": 2, 00:07:08.892 "num_base_bdevs_operational": 2, 00:07:08.892 "base_bdevs_list": [ 00:07:08.892 { 00:07:08.892 "name": "pt1", 00:07:08.892 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:08.892 "is_configured": true, 00:07:08.892 "data_offset": 2048, 00:07:08.892 "data_size": 63488 00:07:08.892 }, 00:07:08.892 { 00:07:08.892 "name": "pt2", 00:07:08.892 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:08.892 "is_configured": true, 00:07:08.892 "data_offset": 2048, 00:07:08.892 "data_size": 63488 00:07:08.892 } 00:07:08.892 ] 00:07:08.892 } 00:07:08.892 } 00:07:08.892 }' 00:07:08.892 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:08.892 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:08.892 pt2' 00:07:08.892 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:08.892 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:08.892 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:09.150 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:09.150 "name": "pt1", 00:07:09.150 "aliases": [ 00:07:09.150 "00000000-0000-0000-0000-000000000001" 00:07:09.150 ], 00:07:09.150 "product_name": "passthru", 00:07:09.150 "block_size": 512, 00:07:09.150 "num_blocks": 65536, 00:07:09.150 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.150 "assigned_rate_limits": { 00:07:09.150 "rw_ios_per_sec": 0, 00:07:09.150 "rw_mbytes_per_sec": 0, 00:07:09.150 "r_mbytes_per_sec": 0, 00:07:09.150 "w_mbytes_per_sec": 
0 00:07:09.150 }, 00:07:09.150 "claimed": true, 00:07:09.150 "claim_type": "exclusive_write", 00:07:09.150 "zoned": false, 00:07:09.150 "supported_io_types": { 00:07:09.150 "read": true, 00:07:09.150 "write": true, 00:07:09.150 "unmap": true, 00:07:09.150 "flush": true, 00:07:09.150 "reset": true, 00:07:09.150 "nvme_admin": false, 00:07:09.151 "nvme_io": false, 00:07:09.151 "nvme_io_md": false, 00:07:09.151 "write_zeroes": true, 00:07:09.151 "zcopy": true, 00:07:09.151 "get_zone_info": false, 00:07:09.151 "zone_management": false, 00:07:09.151 "zone_append": false, 00:07:09.151 "compare": false, 00:07:09.151 "compare_and_write": false, 00:07:09.151 "abort": true, 00:07:09.151 "seek_hole": false, 00:07:09.151 "seek_data": false, 00:07:09.151 "copy": true, 00:07:09.151 "nvme_iov_md": false 00:07:09.151 }, 00:07:09.151 "memory_domains": [ 00:07:09.151 { 00:07:09.151 "dma_device_id": "system", 00:07:09.151 "dma_device_type": 1 00:07:09.151 }, 00:07:09.151 { 00:07:09.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.151 "dma_device_type": 2 00:07:09.151 } 00:07:09.151 ], 00:07:09.151 "driver_specific": { 00:07:09.151 "passthru": { 00:07:09.151 "name": "pt1", 00:07:09.151 "base_bdev_name": "malloc1" 00:07:09.151 } 00:07:09.151 } 00:07:09.151 }' 00:07:09.151 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:09.151 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:09.151 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:09.151 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:09.151 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:09.151 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:09.151 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:09.151 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:09.410 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:09.410 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:09.410 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:09.410 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:09.410 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:09.410 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:09.410 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:09.669 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:09.669 "name": "pt2", 00:07:09.669 "aliases": [ 00:07:09.669 "00000000-0000-0000-0000-000000000002" 00:07:09.669 ], 00:07:09.669 "product_name": "passthru", 00:07:09.669 "block_size": 512, 00:07:09.669 "num_blocks": 65536, 00:07:09.669 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.669 "assigned_rate_limits": { 00:07:09.669 "rw_ios_per_sec": 0, 00:07:09.669 "rw_mbytes_per_sec": 0, 00:07:09.669 "r_mbytes_per_sec": 0, 00:07:09.669 "w_mbytes_per_sec": 0 00:07:09.669 }, 00:07:09.669 "claimed": true, 00:07:09.669 "claim_type": "exclusive_write", 00:07:09.669 "zoned": false, 00:07:09.669 
"supported_io_types": { 00:07:09.669 "read": true, 00:07:09.669 "write": true, 00:07:09.669 "unmap": true, 00:07:09.669 "flush": true, 00:07:09.669 "reset": true, 00:07:09.669 "nvme_admin": false, 00:07:09.669 "nvme_io": false, 00:07:09.669 "nvme_io_md": false, 00:07:09.669 "write_zeroes": true, 00:07:09.669 "zcopy": true, 00:07:09.669 "get_zone_info": false, 00:07:09.669 "zone_management": false, 00:07:09.669 "zone_append": false, 00:07:09.669 "compare": false, 00:07:09.669 "compare_and_write": false, 00:07:09.669 "abort": true, 00:07:09.669 "seek_hole": false, 00:07:09.669 "seek_data": false, 00:07:09.669 "copy": true, 00:07:09.669 "nvme_iov_md": false 00:07:09.669 }, 00:07:09.669 "memory_domains": [ 00:07:09.669 { 00:07:09.669 "dma_device_id": "system", 00:07:09.669 "dma_device_type": 1 00:07:09.669 }, 00:07:09.669 { 00:07:09.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.669 "dma_device_type": 2 00:07:09.669 } 00:07:09.669 ], 00:07:09.669 "driver_specific": { 00:07:09.669 "passthru": { 00:07:09.669 "name": "pt2", 00:07:09.669 "base_bdev_name": "malloc2" 00:07:09.669 } 00:07:09.669 } 00:07:09.669 }' 00:07:09.669 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:09.669 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:09.669 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:09.669 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:09.669 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:09.669 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:09.669 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:09.928 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:09.928 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:09.928 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:09.928 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:09.928 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:09.928 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:09.928 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:07:10.187 [2024-08-14 06:38:37.263912] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.187 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=63e211a8-1deb-49a7-80de-fd9b3086703c 00:07:10.187 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 63e211a8-1deb-49a7-80de-fd9b3086703c ']' 00:07:10.187 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:10.446 [2024-08-14 06:38:37.455342] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:10.446 [2024-08-14 06:38:37.455466] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:10.446 [2024-08-14 06:38:37.455581] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.446 
[2024-08-14 06:38:37.455643] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.446 [2024-08-14 06:38:37.455660] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:10.446 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:10.446 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:07:10.446 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:07:10.446 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:07:10.446 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:07:10.446 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:10.705 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:07:10.705 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:10.963 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:10.963 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:11.222 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:07:11.222 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:11.222 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:07:11.222 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:11.222 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.222 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:11.222 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.222 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:11.222 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.222 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:11.222 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.222 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:11.222 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:11.482 [2024-08-14 06:38:38.481738] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:11.482 [2024-08-14 06:38:38.483622] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:11.482 [2024-08-14 06:38:38.483699] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:11.482 [2024-08-14 06:38:38.483788] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:11.482 [2024-08-14 06:38:38.483807] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:11.482 [2024-08-14 06:38:38.483821] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:11.482 request: 00:07:11.482 { 00:07:11.482 "name": "raid_bdev1", 00:07:11.482 "raid_level": "raid0", 00:07:11.482 "base_bdevs": [ 00:07:11.482 "malloc1", 00:07:11.482 "malloc2" 00:07:11.482 ], 00:07:11.482 "strip_size_kb": 64, 00:07:11.482 "superblock": false, 00:07:11.482 "method": "bdev_raid_create", 00:07:11.482 "req_id": 1 00:07:11.482 } 00:07:11.482 Got JSON-RPC error response 00:07:11.482 response: 00:07:11.482 { 00:07:11.482 "code": -17, 00:07:11.482 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:11.482 } 00:07:11.482 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:07:11.482 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:11.482 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:07:11.482 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:11.482 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:11.482 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:07:11.482 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:07:11.482 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:07:11.482 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:11.742 [2024-08-14 06:38:38.877024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:11.742 [2024-08-14 06:38:38.877216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.742 [2024-08-14 06:38:38.877260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:11.742 [2024-08-14 06:38:38.877300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.742 [2024-08-14 06:38:38.879508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.742 [2024-08-14 06:38:38.879624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:11.742 [2024-08-14 06:38:38.879772] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:11.742 [2024-08-14 06:38:38.879878] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt1 is claimed 00:07:11.742 pt1 00:07:11.742 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:11.742 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:11.742 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:11.742 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:11.742 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:11.742 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:11.742 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:11.742 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:11.742 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:11.742 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:11.742 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:11.742 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.002 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:12.002 "name": "raid_bdev1", 00:07:12.002 "uuid": "63e211a8-1deb-49a7-80de-fd9b3086703c", 00:07:12.002 "strip_size_kb": 64, 00:07:12.002 "state": "configuring", 00:07:12.002 "raid_level": "raid0", 00:07:12.002 "superblock": true, 00:07:12.002 "num_base_bdevs": 2, 00:07:12.002 "num_base_bdevs_discovered": 1, 00:07:12.002 "num_base_bdevs_operational": 2, 00:07:12.002 "base_bdevs_list": [ 00:07:12.002 { 00:07:12.002 "name": "pt1", 00:07:12.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:12.002 "is_configured": true, 00:07:12.002 "data_offset": 2048, 00:07:12.002 "data_size": 63488 00:07:12.002 }, 00:07:12.002 { 00:07:12.002 "name": null, 00:07:12.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:12.002 "is_configured": false, 00:07:12.002 "data_offset": 2048, 00:07:12.002 "data_size": 63488 00:07:12.002 } 00:07:12.002 ] 00:07:12.002 }' 00:07:12.002 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:12.002 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.570 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:07:12.570 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:07:12.570 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:07:12.570 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:12.570 [2024-08-14 06:38:39.820154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:12.570 [2024-08-14 06:38:39.820337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.570 [2024-08-14 06:38:39.820380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:12.571 [2024-08-14 
06:38:39.820417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.571 [2024-08-14 06:38:39.820907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.571 [2024-08-14 06:38:39.820983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:12.571 [2024-08-14 06:38:39.821111] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:12.571 [2024-08-14 06:38:39.821196] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:12.571 [2024-08-14 06:38:39.821343] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:12.571 [2024-08-14 06:38:39.821389] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:12.571 [2024-08-14 06:38:39.821647] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:12.571 [2024-08-14 06:38:39.821764] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:12.571 [2024-08-14 06:38:39.821773] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:12.571 [2024-08-14 06:38:39.821879] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.832 pt2 00:07:12.832 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:07:12.832 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:07:12.832 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:12.832 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:12.832 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:12.832 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:12.832 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:12.832 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:12.832 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:12.832 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:12.832 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:12.832 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:12.832 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:12.832 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.832 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:12.832 "name": "raid_bdev1", 00:07:12.832 "uuid": "63e211a8-1deb-49a7-80de-fd9b3086703c", 00:07:12.832 "strip_size_kb": 64, 00:07:12.832 "state": "online", 00:07:12.832 "raid_level": "raid0", 00:07:12.832 "superblock": true, 00:07:12.832 "num_base_bdevs": 2, 00:07:12.832 "num_base_bdevs_discovered": 2, 00:07:12.832 "num_base_bdevs_operational": 2, 00:07:12.832 "base_bdevs_list": [ 00:07:12.832 { 00:07:12.832 "name": "pt1", 00:07:12.832 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:12.832 "is_configured": true, 00:07:12.832 "data_offset": 2048, 00:07:12.832 "data_size": 63488 00:07:12.832 }, 00:07:12.832 { 00:07:12.832 "name": "pt2", 00:07:12.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:12.832 "is_configured": true, 00:07:12.832 "data_offset": 2048, 00:07:12.832 "data_size": 63488 00:07:12.832 } 00:07:12.832 ] 00:07:12.832 }' 00:07:12.832 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:12.832 06:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.405 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:07:13.405 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:13.405 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:13.405 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:13.405 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:13.405 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:13.405 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:13.405 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:13.664 [2024-08-14 06:38:40.762821] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.664 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:13.664 "name": "raid_bdev1", 00:07:13.664 "aliases": [ 00:07:13.664 "63e211a8-1deb-49a7-80de-fd9b3086703c" 00:07:13.664 ], 00:07:13.664 "product_name": "Raid Volume", 00:07:13.664 "block_size": 512, 00:07:13.664 "num_blocks": 126976, 00:07:13.664 "uuid": "63e211a8-1deb-49a7-80de-fd9b3086703c", 00:07:13.664 "assigned_rate_limits": { 00:07:13.664 "rw_ios_per_sec": 0, 00:07:13.664 "rw_mbytes_per_sec": 0, 00:07:13.664 "r_mbytes_per_sec": 0, 00:07:13.664 "w_mbytes_per_sec": 0 00:07:13.664 }, 00:07:13.664 "claimed": false, 00:07:13.664 "zoned": false, 00:07:13.664 "supported_io_types": { 00:07:13.664 "read": true, 00:07:13.664 "write": true, 00:07:13.664 "unmap": true, 00:07:13.664 "flush": true, 00:07:13.664 "reset": true, 00:07:13.664 "nvme_admin": false, 00:07:13.664 "nvme_io": false, 00:07:13.664 "nvme_io_md": false, 00:07:13.664 "write_zeroes": true, 00:07:13.664 "zcopy": false, 00:07:13.664 "get_zone_info": false, 00:07:13.664 "zone_management": false, 00:07:13.664 "zone_append": false, 00:07:13.664 "compare": false, 00:07:13.664 "compare_and_write": false, 00:07:13.664 "abort": false, 00:07:13.664 "seek_hole": false, 00:07:13.664 "seek_data": false, 00:07:13.664 "copy": false, 00:07:13.664 "nvme_iov_md": false 00:07:13.664 }, 00:07:13.664 "memory_domains": [ 00:07:13.664 { 00:07:13.664 "dma_device_id": "system", 00:07:13.664 "dma_device_type": 1 00:07:13.664 }, 00:07:13.664 { 00:07:13.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.664 "dma_device_type": 2 00:07:13.664 }, 00:07:13.664 { 00:07:13.664 "dma_device_id": "system", 00:07:13.664 "dma_device_type": 1 00:07:13.664 }, 00:07:13.664 { 00:07:13.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.664 "dma_device_type": 2 00:07:13.664 } 00:07:13.664 ], 00:07:13.664 "driver_specific": { 00:07:13.664 "raid": { 
00:07:13.664 "uuid": "63e211a8-1deb-49a7-80de-fd9b3086703c", 00:07:13.664 "strip_size_kb": 64, 00:07:13.664 "state": "online", 00:07:13.664 "raid_level": "raid0", 00:07:13.664 "superblock": true, 00:07:13.664 "num_base_bdevs": 2, 00:07:13.664 "num_base_bdevs_discovered": 2, 00:07:13.664 "num_base_bdevs_operational": 2, 00:07:13.664 "base_bdevs_list": [ 00:07:13.664 { 00:07:13.664 "name": "pt1", 00:07:13.664 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:13.664 "is_configured": true, 00:07:13.664 "data_offset": 2048, 00:07:13.664 "data_size": 63488 00:07:13.664 }, 00:07:13.664 { 00:07:13.664 "name": "pt2", 00:07:13.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:13.665 "is_configured": true, 00:07:13.665 "data_offset": 2048, 00:07:13.665 "data_size": 63488 00:07:13.665 } 00:07:13.665 ] 00:07:13.665 } 00:07:13.665 } 00:07:13.665 }' 00:07:13.665 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:13.665 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:13.665 pt2' 00:07:13.665 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:13.665 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:13.665 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:13.924 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:13.924 "name": "pt1", 00:07:13.924 "aliases": [ 00:07:13.924 "00000000-0000-0000-0000-000000000001" 00:07:13.924 ], 00:07:13.924 "product_name": "passthru", 00:07:13.924 "block_size": 512, 00:07:13.924 "num_blocks": 65536, 00:07:13.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:13.924 "assigned_rate_limits": { 00:07:13.924 "rw_ios_per_sec": 0, 00:07:13.924 "rw_mbytes_per_sec": 0, 00:07:13.925 "r_mbytes_per_sec": 0, 00:07:13.925 "w_mbytes_per_sec": 0 00:07:13.925 }, 00:07:13.925 "claimed": true, 00:07:13.925 "claim_type": "exclusive_write", 00:07:13.925 "zoned": false, 00:07:13.925 "supported_io_types": { 00:07:13.925 "read": true, 00:07:13.925 "write": true, 00:07:13.925 "unmap": true, 00:07:13.925 "flush": true, 00:07:13.925 "reset": true, 00:07:13.925 "nvme_admin": false, 00:07:13.925 "nvme_io": false, 00:07:13.925 "nvme_io_md": false, 00:07:13.925 "write_zeroes": true, 00:07:13.925 "zcopy": true, 00:07:13.925 "get_zone_info": false, 00:07:13.925 "zone_management": false, 00:07:13.925 "zone_append": false, 00:07:13.925 "compare": false, 00:07:13.925 "compare_and_write": false, 00:07:13.925 "abort": true, 00:07:13.925 "seek_hole": false, 00:07:13.925 "seek_data": false, 00:07:13.925 "copy": true, 00:07:13.925 "nvme_iov_md": false 00:07:13.925 }, 00:07:13.925 "memory_domains": [ 00:07:13.925 { 00:07:13.925 "dma_device_id": "system", 00:07:13.925 "dma_device_type": 1 00:07:13.925 }, 00:07:13.925 { 00:07:13.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.925 "dma_device_type": 2 00:07:13.925 } 00:07:13.925 ], 00:07:13.925 "driver_specific": { 00:07:13.925 "passthru": { 00:07:13.925 "name": "pt1", 00:07:13.925 "base_bdev_name": "malloc1" 00:07:13.925 } 00:07:13.925 } 00:07:13.925 }' 00:07:13.925 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:13.925 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:07:13.925 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:13.925 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:13.925 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:13.925 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:13.925 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:14.184 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:14.184 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:14.184 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:14.184 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:14.184 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:14.184 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:14.184 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:14.184 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:14.444 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:14.444 "name": "pt2", 00:07:14.444 "aliases": [ 00:07:14.444 "00000000-0000-0000-0000-000000000002" 00:07:14.444 ], 00:07:14.444 "product_name": "passthru", 00:07:14.444 "block_size": 512, 00:07:14.444 "num_blocks": 65536, 00:07:14.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:14.444 "assigned_rate_limits": { 00:07:14.444 "rw_ios_per_sec": 0, 00:07:14.444 "rw_mbytes_per_sec": 0, 00:07:14.444 "r_mbytes_per_sec": 0, 00:07:14.444 "w_mbytes_per_sec": 0 00:07:14.444 }, 00:07:14.444 "claimed": true, 00:07:14.444 "claim_type": "exclusive_write", 00:07:14.444 "zoned": false, 00:07:14.444 "supported_io_types": { 00:07:14.444 "read": true, 00:07:14.444 "write": true, 00:07:14.444 "unmap": true, 00:07:14.444 "flush": true, 00:07:14.444 "reset": true, 00:07:14.444 "nvme_admin": false, 00:07:14.444 "nvme_io": false, 00:07:14.444 "nvme_io_md": false, 00:07:14.444 "write_zeroes": true, 00:07:14.444 "zcopy": true, 00:07:14.444 "get_zone_info": false, 00:07:14.444 "zone_management": false, 00:07:14.444 "zone_append": false, 00:07:14.444 "compare": false, 00:07:14.444 "compare_and_write": false, 00:07:14.444 "abort": true, 00:07:14.444 "seek_hole": false, 00:07:14.444 "seek_data": false, 00:07:14.444 "copy": true, 00:07:14.444 "nvme_iov_md": false 00:07:14.444 }, 00:07:14.444 "memory_domains": [ 00:07:14.444 { 00:07:14.444 "dma_device_id": "system", 00:07:14.444 "dma_device_type": 1 00:07:14.444 }, 00:07:14.444 { 00:07:14.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.444 "dma_device_type": 2 00:07:14.444 } 00:07:14.444 ], 00:07:14.444 "driver_specific": { 00:07:14.444 "passthru": { 00:07:14.444 "name": "pt2", 00:07:14.444 "base_bdev_name": "malloc2" 00:07:14.444 } 00:07:14.444 } 00:07:14.444 }' 00:07:14.444 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:14.444 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:14.444 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:14.444 06:38:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:14.444 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:14.796 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:14.796 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:14.796 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:14.796 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:14.796 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:14.796 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:14.796 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:14.796 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:14.796 06:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:07:15.055 [2024-08-14 06:38:42.112717] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.055 06:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 63e211a8-1deb-49a7-80de-fd9b3086703c '!=' 63e211a8-1deb-49a7-80de-fd9b3086703c ']' 00:07:15.055 06:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:07:15.055 06:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:15.055 06:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:15.055 06:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 71753 00:07:15.055 06:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 71753 ']' 00:07:15.055 06:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 71753 00:07:15.055 06:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:07:15.055 06:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:15.055 06:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71753 00:07:15.055 06:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:15.055 06:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:15.055 06:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71753' 00:07:15.055 killing process with pid 71753 00:07:15.055 06:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 71753 00:07:15.055 [2024-08-14 06:38:42.181230] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:15.055 [2024-08-14 06:38:42.181417] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.055 [2024-08-14 06:38:42.181517] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:15.055 06:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 71753 00:07:15.055 [2024-08-14 06:38:42.181603] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:15.055 [2024-08-14
06:38:42.205762] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.315 06:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:07:15.315 00:07:15.315 real 0m9.533s 00:07:15.315 user 0m17.085s 00:07:15.315 sys 0m1.477s 00:07:15.315 06:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.315 06:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.315 ************************************ 00:07:15.315 END TEST raid_superblock_test 00:07:15.315 ************************************ 00:07:15.315 06:38:42 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:15.315 06:38:42 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:15.315 06:38:42 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.315 06:38:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.315 ************************************ 00:07:15.315 START TEST raid_read_error_test 00:07:15.315 ************************************ 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 2 read 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:07:15.315 06:38:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.jg8MIYdNaG 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=72086 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 72086 /var/tmp/spdk-raid.sock 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 72086 ']' 00:07:15.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:15.315 06:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.573 [2024-08-14 06:38:42.622563] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:07:15.573 [2024-08-14 06:38:42.622683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72086 ] 00:07:15.573 [2024-08-14 06:38:42.771232] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.573 [2024-08-14 06:38:42.822664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.853 [2024-08-14 06:38:42.868085] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.853 [2024-08-14 06:38:42.868143] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.421 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:16.421 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:07:16.421 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:16.421 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:16.680 BaseBdev1_malloc 00:07:16.680 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:16.680 true 00:07:16.680 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:16.939 [2024-08-14 06:38:44.073854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:16.939 [2024-08-14 06:38:44.073955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.939 [2024-08-14 06:38:44.073983] vbdev_passthru.c: 
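[Annotation] The trace that follows assembles the raid_read_error_test stack one RPC at a time: a malloc bdev, an error bdev wrapped around it, a passthru bdev on top, and finally a raid0 volume built from the two passthrus, with a read failure injected on one leg once bdevperf traffic is running. A condensed sketch of that sequence is below; the rpc.py path, socket and sizes are taken from this run's log, while the loop and the $rpc shorthand are added here for illustration and are not the literal autotest code.

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for b in BaseBdev1 BaseBdev2; do
      $rpc bdev_malloc_create 32 512 -b "${b}_malloc"         # 32 MiB backing bdev with 512-byte blocks (65536 blocks)
      $rpc bdev_error_create "${b}_malloc"                    # creates EE_${b}_malloc, the fault-injection point
      $rpc bdev_passthru_create -b "EE_${b}_malloc" -p "$b"   # passthru layer that the raid module will claim
  done
  $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s   # 64 KiB strip, superblock enabled
  $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure                    # issued later, while bdevperf I/O is in flight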
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:16.939 [2024-08-14 06:38:44.074009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.939 [2024-08-14 06:38:44.076603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.939 BaseBdev1 00:07:16.939 [2024-08-14 06:38:44.076725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:16.939 06:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:16.939 06:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:17.198 BaseBdev2_malloc 00:07:17.198 06:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:17.458 true 00:07:17.458 06:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:17.458 [2024-08-14 06:38:44.669863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:17.458 [2024-08-14 06:38:44.670043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.458 [2024-08-14 06:38:44.670092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:17.458 [2024-08-14 06:38:44.670129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.458 [2024-08-14 06:38:44.672466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.458 [2024-08-14 06:38:44.672569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:17.458 BaseBdev2 00:07:17.458 06:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:17.717 [2024-08-14 06:38:44.889572] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.717 [2024-08-14 06:38:44.891730] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:17.717 [2024-08-14 06:38:44.892047] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:17.717 [2024-08-14 06:38:44.892117] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:17.717 [2024-08-14 06:38:44.892475] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:17.717 [2024-08-14 06:38:44.892683] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:17.717 [2024-08-14 06:38:44.892729] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:17.717 [2024-08-14 06:38:44.892968] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.717 06:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:17.717 06:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:17.717 06:38:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:17.717 06:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:17.717 06:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:17.717 06:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:17.717 06:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:17.717 06:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:17.717 06:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:17.717 06:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:17.717 06:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:17.717 06:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:17.976 06:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:17.976 "name": "raid_bdev1", 00:07:17.976 "uuid": "53b44838-3125-473b-9bf9-a7b9d7a727f8", 00:07:17.976 "strip_size_kb": 64, 00:07:17.976 "state": "online", 00:07:17.976 "raid_level": "raid0", 00:07:17.976 "superblock": true, 00:07:17.976 "num_base_bdevs": 2, 00:07:17.976 "num_base_bdevs_discovered": 2, 00:07:17.976 "num_base_bdevs_operational": 2, 00:07:17.976 "base_bdevs_list": [ 00:07:17.976 { 00:07:17.976 "name": "BaseBdev1", 00:07:17.976 "uuid": "33874a6d-a84c-5df7-8fc8-b06b9e317b8c", 00:07:17.976 "is_configured": true, 00:07:17.976 "data_offset": 2048, 00:07:17.976 "data_size": 63488 00:07:17.976 }, 00:07:17.976 { 00:07:17.976 "name": "BaseBdev2", 00:07:17.976 "uuid": "ea86d741-d32c-5520-b206-64d1910705fe", 00:07:17.976 "is_configured": true, 00:07:17.976 "data_offset": 2048, 00:07:17.976 "data_size": 63488 00:07:17.976 } 00:07:17.976 ] 00:07:17.976 }' 00:07:17.977 06:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:17.977 06:38:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.545 06:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:07:18.545 06:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:18.545 [2024-08-14 06:38:45.712623] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:19.482 06:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:19.742 06:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:07:19.742 06:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:19.742 06:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:07:19.742 06:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:19.742 06:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:19.742 06:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:07:19.742 06:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:19.742 06:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:19.742 06:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:19.742 06:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:19.742 06:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:19.742 06:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:19.742 06:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:19.742 06:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.742 06:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:20.001 06:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:20.001 "name": "raid_bdev1", 00:07:20.001 "uuid": "53b44838-3125-473b-9bf9-a7b9d7a727f8", 00:07:20.001 "strip_size_kb": 64, 00:07:20.001 "state": "online", 00:07:20.001 "raid_level": "raid0", 00:07:20.001 "superblock": true, 00:07:20.001 "num_base_bdevs": 2, 00:07:20.001 "num_base_bdevs_discovered": 2, 00:07:20.001 "num_base_bdevs_operational": 2, 00:07:20.001 "base_bdevs_list": [ 00:07:20.001 { 00:07:20.001 "name": "BaseBdev1", 00:07:20.001 "uuid": "33874a6d-a84c-5df7-8fc8-b06b9e317b8c", 00:07:20.001 "is_configured": true, 00:07:20.001 "data_offset": 2048, 00:07:20.001 "data_size": 63488 00:07:20.001 }, 00:07:20.001 { 00:07:20.001 "name": "BaseBdev2", 00:07:20.001 "uuid": "ea86d741-d32c-5520-b206-64d1910705fe", 00:07:20.001 "is_configured": true, 00:07:20.001 "data_offset": 2048, 00:07:20.001 "data_size": 63488 00:07:20.001 } 00:07:20.001 ] 00:07:20.001 }' 00:07:20.001 06:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:20.001 06:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.569 06:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:20.828 [2024-08-14 06:38:47.857773] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:20.828 [2024-08-14 06:38:47.857924] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:20.828 [2024-08-14 06:38:47.860580] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.828 [2024-08-14 06:38:47.860686] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.828 [2024-08-14 06:38:47.860746] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.828 [2024-08-14 06:38:47.860820] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:20.828 0 00:07:20.828 06:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 72086 00:07:20.828 06:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 72086 ']' 00:07:20.828 06:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 72086 
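[Annotation] After the I/O phase the script shuts bdevperf down (the killprocess/wait trace around this point) and grades the run from the bdevperf output file rather than from RPC state. Roughly, using the log path from this run:

  fail_per_s=$(grep -v Job /raidtest/tmp.jg8MIYdNaG | grep raid_bdev1 | awk '{print $6}')
  [[ "$fail_per_s" != "0.00" ]]   # raid0 has no redundancy, so injected read errors are expected to surface as failed I/O

The fail_per_s=0.47 extracted below satisfies that check.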
00:07:20.828 06:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:07:20.828 06:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:20.828 06:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72086 00:07:20.828 06:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:20.828 06:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:20.828 06:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72086' 00:07:20.828 killing process with pid 72086 00:07:20.828 06:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 72086 00:07:20.828 06:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 72086 00:07:20.829 [2024-08-14 06:38:47.932221] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.829 [2024-08-14 06:38:47.948303] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.088 06:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.jg8MIYdNaG 00:07:21.088 06:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:07:21.088 06:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:07:21.088 06:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:07:21.088 06:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:07:21.088 06:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:21.088 06:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:21.088 06:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:07:21.088 00:07:21.088 real 0m5.667s 00:07:21.088 user 0m8.742s 00:07:21.088 sys 0m0.841s 00:07:21.088 06:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.088 06:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.088 ************************************ 00:07:21.088 END TEST raid_read_error_test 00:07:21.088 ************************************ 00:07:21.088 06:38:48 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:21.088 06:38:48 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:21.088 06:38:48 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.088 06:38:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.088 ************************************ 00:07:21.088 START TEST raid_write_error_test 00:07:21.088 ************************************ 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 2 write 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
(( i <= num_base_bdevs )) 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.OB3Or8laO1 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=72257 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 72257 /var/tmp/spdk-raid.sock 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 72257 ']' 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:21.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:21.088 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.347 [2024-08-14 06:38:48.357935] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:07:21.347 [2024-08-14 06:38:48.358131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72257 ] 00:07:21.347 [2024-08-14 06:38:48.486768] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.347 [2024-08-14 06:38:48.533667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.347 [2024-08-14 06:38:48.577501] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.347 [2024-08-14 06:38:48.577544] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.285 06:38:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:22.285 06:38:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:07:22.285 06:38:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:22.285 06:38:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:22.285 BaseBdev1_malloc 00:07:22.285 06:38:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:22.544 true 00:07:22.544 06:38:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:22.544 [2024-08-14 06:38:49.790274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:22.544 [2024-08-14 06:38:49.790471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.544 [2024-08-14 06:38:49.790511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:22.544 [2024-08-14 06:38:49.790539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.544 [2024-08-14 06:38:49.793041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.544 [2024-08-14 06:38:49.793103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:22.544 BaseBdev1 00:07:22.803 06:38:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:22.803 06:38:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:22.803 BaseBdev2_malloc 00:07:22.803 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:23.062 true 00:07:23.062 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:23.319 [2024-08-14 06:38:50.398406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:23.319 [2024-08-14 06:38:50.398497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.319 [2024-08-14 06:38:50.398524] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:23.319 [2024-08-14 06:38:50.398538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.319 [2024-08-14 06:38:50.400931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.319 [2024-08-14 06:38:50.400981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:23.319 BaseBdev2 00:07:23.319 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:23.578 [2024-08-14 06:38:50.606161] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.578 [2024-08-14 06:38:50.608167] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:23.578 [2024-08-14 06:38:50.608425] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:23.578 [2024-08-14 06:38:50.608445] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:23.578 [2024-08-14 06:38:50.608773] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:23.578 [2024-08-14 06:38:50.608958] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:23.578 [2024-08-14 06:38:50.608969] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:23.578 [2024-08-14 06:38:50.609158] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.578 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:23.578 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:23.578 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:23.578 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:23.578 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:23.578 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:23.578 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:23.578 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:23.578 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:23.578 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:23.578 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:23.578 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.838 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:23.838 "name": "raid_bdev1", 00:07:23.838 "uuid": "d056a58c-4ef1-497f-a6c1-46df4010a789", 00:07:23.838 "strip_size_kb": 64, 00:07:23.838 "state": "online", 00:07:23.838 "raid_level": "raid0", 00:07:23.838 "superblock": true, 00:07:23.838 "num_base_bdevs": 2, 00:07:23.838 
"num_base_bdevs_discovered": 2, 00:07:23.838 "num_base_bdevs_operational": 2, 00:07:23.838 "base_bdevs_list": [ 00:07:23.838 { 00:07:23.838 "name": "BaseBdev1", 00:07:23.838 "uuid": "799b5e83-03f0-531b-86c1-78317653275f", 00:07:23.838 "is_configured": true, 00:07:23.838 "data_offset": 2048, 00:07:23.838 "data_size": 63488 00:07:23.838 }, 00:07:23.838 { 00:07:23.838 "name": "BaseBdev2", 00:07:23.838 "uuid": "950195a4-6e97-55f7-bae1-2aa480e82e60", 00:07:23.838 "is_configured": true, 00:07:23.838 "data_offset": 2048, 00:07:23.838 "data_size": 63488 00:07:23.838 } 00:07:23.838 ] 00:07:23.838 }' 00:07:23.838 06:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:23.838 06:38:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.406 06:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:24.406 06:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:07:24.406 [2024-08-14 06:38:51.477051] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:25.343 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:25.608 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:07:25.608 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:25.608 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:07:25.608 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:25.608 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:25.608 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:25.608 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:25.608 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:25.608 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:25.608 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:25.608 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:25.608 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:25.608 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:25.608 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:25.608 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.608 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:25.608 "name": "raid_bdev1", 00:07:25.608 "uuid": "d056a58c-4ef1-497f-a6c1-46df4010a789", 00:07:25.608 "strip_size_kb": 64, 00:07:25.608 "state": "online", 00:07:25.608 "raid_level": "raid0", 00:07:25.608 "superblock": true, 00:07:25.608 "num_base_bdevs": 2, 00:07:25.608 
"num_base_bdevs_discovered": 2, 00:07:25.608 "num_base_bdevs_operational": 2, 00:07:25.608 "base_bdevs_list": [ 00:07:25.608 { 00:07:25.608 "name": "BaseBdev1", 00:07:25.608 "uuid": "799b5e83-03f0-531b-86c1-78317653275f", 00:07:25.608 "is_configured": true, 00:07:25.608 "data_offset": 2048, 00:07:25.608 "data_size": 63488 00:07:25.608 }, 00:07:25.609 { 00:07:25.609 "name": "BaseBdev2", 00:07:25.609 "uuid": "950195a4-6e97-55f7-bae1-2aa480e82e60", 00:07:25.609 "is_configured": true, 00:07:25.609 "data_offset": 2048, 00:07:25.609 "data_size": 63488 00:07:25.609 } 00:07:25.609 ] 00:07:25.609 }' 00:07:25.609 06:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:25.609 06:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.185 06:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:26.444 [2024-08-14 06:38:53.573723] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:26.444 [2024-08-14 06:38:53.573783] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.444 [2024-08-14 06:38:53.576564] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.444 0 00:07:26.444 [2024-08-14 06:38:53.576720] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.444 [2024-08-14 06:38:53.576769] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:26.444 [2024-08-14 06:38:53.576794] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:26.444 06:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 72257 00:07:26.444 06:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 72257 ']' 00:07:26.444 06:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 72257 00:07:26.444 06:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:07:26.444 06:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:26.444 06:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72257 00:07:26.444 killing process with pid 72257 00:07:26.444 06:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:26.444 06:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:26.444 06:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72257' 00:07:26.444 06:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 72257 00:07:26.444 [2024-08-14 06:38:53.640153] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:26.444 06:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 72257 00:07:26.444 [2024-08-14 06:38:53.656532] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.704 06:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.OB3Or8laO1 00:07:26.704 06:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:07:26.704 06:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- 
# awk '{print $6}' 00:07:26.704 06:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.48 00:07:26.704 06:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:07:26.704 06:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:26.704 06:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:26.704 ************************************ 00:07:26.704 END TEST raid_write_error_test 00:07:26.704 ************************************ 00:07:26.704 06:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.48 != \0\.\0\0 ]] 00:07:26.704 00:07:26.704 real 0m5.641s 00:07:26.704 user 0m8.741s 00:07:26.704 sys 0m0.780s 00:07:26.704 06:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.704 06:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.704 06:38:53 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:07:26.704 06:38:53 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:26.964 06:38:53 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:26.964 06:38:53 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.964 06:38:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.964 ************************************ 00:07:26.964 START TEST raid_state_function_test 00:07:26.964 ************************************ 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 false 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local 
strip_size_create_arg 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:07:26.964 Process raid pid: 72415 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=72415 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 72415' 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 72415 /var/tmp/spdk-raid.sock 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 72415 ']' 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:26.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:26.964 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.964 [2024-08-14 06:38:54.058493] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:07:26.965 [2024-08-14 06:38:54.058715] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.965 [2024-08-14 06:38:54.206064] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.224 [2024-08-14 06:38:54.254595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.224 [2024-08-14 06:38:54.299002] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.224 [2024-08-14 06:38:54.299130] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.793 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:27.793 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:07:27.793 06:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:28.051 [2024-08-14 06:38:55.071731] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:28.051 [2024-08-14 06:38:55.071897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:28.051 [2024-08-14 06:38:55.071930] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.051 [2024-08-14 06:38:55.071941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.051 06:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:28.051 06:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:28.051 06:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:28.051 06:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:28.051 06:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:28.051 06:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:28.051 06:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:28.051 06:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:28.051 06:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:28.051 06:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:28.051 06:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:28.051 06:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.052 06:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:28.052 "name": "Existed_Raid", 00:07:28.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.052 "strip_size_kb": 64, 00:07:28.052 "state": "configuring", 00:07:28.052 "raid_level": "concat", 00:07:28.052 "superblock": false, 00:07:28.052 "num_base_bdevs": 
2, 00:07:28.052 "num_base_bdevs_discovered": 0, 00:07:28.052 "num_base_bdevs_operational": 2, 00:07:28.052 "base_bdevs_list": [ 00:07:28.052 { 00:07:28.052 "name": "BaseBdev1", 00:07:28.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.052 "is_configured": false, 00:07:28.052 "data_offset": 0, 00:07:28.052 "data_size": 0 00:07:28.052 }, 00:07:28.052 { 00:07:28.052 "name": "BaseBdev2", 00:07:28.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.052 "is_configured": false, 00:07:28.052 "data_offset": 0, 00:07:28.052 "data_size": 0 00:07:28.052 } 00:07:28.052 ] 00:07:28.052 }' 00:07:28.052 06:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:28.052 06:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.621 06:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:28.880 [2024-08-14 06:38:55.994102] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.880 [2024-08-14 06:38:55.994276] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:28.880 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:29.139 [2024-08-14 06:38:56.209749] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:29.139 [2024-08-14 06:38:56.209867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:29.139 [2024-08-14 06:38:56.209947] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.139 [2024-08-14 06:38:56.209985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.139 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:29.398 [2024-08-14 06:38:56.402706] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.398 BaseBdev1 00:07:29.398 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:29.398 06:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:07:29.398 06:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:07:29.398 06:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:07:29.398 06:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:07:29.398 06:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:07:29.398 06:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:29.398 06:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:29.658 [ 00:07:29.658 { 00:07:29.658 "name": "BaseBdev1", 00:07:29.658 "aliases": [ 00:07:29.658 
"ea6709ea-1e20-451d-ac89-4f7c933e5755" 00:07:29.658 ], 00:07:29.658 "product_name": "Malloc disk", 00:07:29.658 "block_size": 512, 00:07:29.658 "num_blocks": 65536, 00:07:29.658 "uuid": "ea6709ea-1e20-451d-ac89-4f7c933e5755", 00:07:29.658 "assigned_rate_limits": { 00:07:29.658 "rw_ios_per_sec": 0, 00:07:29.658 "rw_mbytes_per_sec": 0, 00:07:29.658 "r_mbytes_per_sec": 0, 00:07:29.658 "w_mbytes_per_sec": 0 00:07:29.658 }, 00:07:29.658 "claimed": true, 00:07:29.658 "claim_type": "exclusive_write", 00:07:29.658 "zoned": false, 00:07:29.658 "supported_io_types": { 00:07:29.658 "read": true, 00:07:29.658 "write": true, 00:07:29.658 "unmap": true, 00:07:29.658 "flush": true, 00:07:29.658 "reset": true, 00:07:29.658 "nvme_admin": false, 00:07:29.658 "nvme_io": false, 00:07:29.658 "nvme_io_md": false, 00:07:29.658 "write_zeroes": true, 00:07:29.658 "zcopy": true, 00:07:29.658 "get_zone_info": false, 00:07:29.658 "zone_management": false, 00:07:29.658 "zone_append": false, 00:07:29.658 "compare": false, 00:07:29.658 "compare_and_write": false, 00:07:29.658 "abort": true, 00:07:29.658 "seek_hole": false, 00:07:29.658 "seek_data": false, 00:07:29.658 "copy": true, 00:07:29.658 "nvme_iov_md": false 00:07:29.658 }, 00:07:29.658 "memory_domains": [ 00:07:29.658 { 00:07:29.658 "dma_device_id": "system", 00:07:29.658 "dma_device_type": 1 00:07:29.658 }, 00:07:29.658 { 00:07:29.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.658 "dma_device_type": 2 00:07:29.658 } 00:07:29.658 ], 00:07:29.658 "driver_specific": {} 00:07:29.658 } 00:07:29.658 ] 00:07:29.658 06:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:07:29.658 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:29.658 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:29.658 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:29.658 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:29.658 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:29.658 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:29.658 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:29.658 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:29.658 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:29.658 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:29.658 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:29.658 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.917 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:29.917 "name": "Existed_Raid", 00:07:29.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.917 "strip_size_kb": 64, 00:07:29.917 "state": "configuring", 00:07:29.917 "raid_level": "concat", 00:07:29.917 "superblock": false, 00:07:29.917 "num_base_bdevs": 2, 00:07:29.917 
"num_base_bdevs_discovered": 1, 00:07:29.917 "num_base_bdevs_operational": 2, 00:07:29.917 "base_bdevs_list": [ 00:07:29.917 { 00:07:29.917 "name": "BaseBdev1", 00:07:29.917 "uuid": "ea6709ea-1e20-451d-ac89-4f7c933e5755", 00:07:29.917 "is_configured": true, 00:07:29.917 "data_offset": 0, 00:07:29.917 "data_size": 65536 00:07:29.917 }, 00:07:29.917 { 00:07:29.917 "name": "BaseBdev2", 00:07:29.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.917 "is_configured": false, 00:07:29.917 "data_offset": 0, 00:07:29.917 "data_size": 0 00:07:29.917 } 00:07:29.917 ] 00:07:29.917 }' 00:07:29.917 06:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:29.917 06:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.486 06:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:30.486 [2024-08-14 06:38:57.680618] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:30.486 [2024-08-14 06:38:57.680689] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:30.486 06:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:30.745 [2024-08-14 06:38:57.884387] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:30.746 [2024-08-14 06:38:57.886344] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:30.746 [2024-08-14 06:38:57.886394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:30.746 06:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:30.746 06:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:30.746 06:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:30.746 06:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:30.746 06:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:30.746 06:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:30.746 06:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:30.746 06:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:30.746 06:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:30.746 06:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:30.746 06:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:30.746 06:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:30.746 06:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:30.746 06:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:31.005 06:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:31.005 "name": "Existed_Raid", 00:07:31.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.005 "strip_size_kb": 64, 00:07:31.005 "state": "configuring", 00:07:31.005 "raid_level": "concat", 00:07:31.005 "superblock": false, 00:07:31.005 "num_base_bdevs": 2, 00:07:31.005 "num_base_bdevs_discovered": 1, 00:07:31.005 "num_base_bdevs_operational": 2, 00:07:31.005 "base_bdevs_list": [ 00:07:31.005 { 00:07:31.005 "name": "BaseBdev1", 00:07:31.005 "uuid": "ea6709ea-1e20-451d-ac89-4f7c933e5755", 00:07:31.005 "is_configured": true, 00:07:31.005 "data_offset": 0, 00:07:31.005 "data_size": 65536 00:07:31.005 }, 00:07:31.005 { 00:07:31.005 "name": "BaseBdev2", 00:07:31.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.005 "is_configured": false, 00:07:31.005 "data_offset": 0, 00:07:31.005 "data_size": 0 00:07:31.005 } 00:07:31.005 ] 00:07:31.005 }' 00:07:31.005 06:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:31.005 06:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.574 06:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:31.833 [2024-08-14 06:38:58.872962] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:31.833 [2024-08-14 06:38:58.873123] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:31.833 [2024-08-14 06:38:58.873183] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:31.833 [2024-08-14 06:38:58.873584] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:31.833 [2024-08-14 06:38:58.873822] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:31.833 [2024-08-14 06:38:58.873881] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:31.833 [2024-08-14 06:38:58.874231] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.833 BaseBdev2 00:07:31.833 06:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:31.833 06:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:07:31.833 06:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:07:31.833 06:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:07:31.833 06:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:07:31.833 06:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:07:31.833 06:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:32.093 06:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:32.093 [ 00:07:32.093 { 00:07:32.093 "name": "BaseBdev2", 00:07:32.093 "aliases": [ 00:07:32.093 "847bb069-48f8-491d-80dc-395c4d4bbc8a" 00:07:32.093 ], 00:07:32.093 
"product_name": "Malloc disk", 00:07:32.093 "block_size": 512, 00:07:32.093 "num_blocks": 65536, 00:07:32.093 "uuid": "847bb069-48f8-491d-80dc-395c4d4bbc8a", 00:07:32.093 "assigned_rate_limits": { 00:07:32.093 "rw_ios_per_sec": 0, 00:07:32.093 "rw_mbytes_per_sec": 0, 00:07:32.093 "r_mbytes_per_sec": 0, 00:07:32.093 "w_mbytes_per_sec": 0 00:07:32.093 }, 00:07:32.093 "claimed": true, 00:07:32.093 "claim_type": "exclusive_write", 00:07:32.093 "zoned": false, 00:07:32.093 "supported_io_types": { 00:07:32.093 "read": true, 00:07:32.093 "write": true, 00:07:32.093 "unmap": true, 00:07:32.093 "flush": true, 00:07:32.093 "reset": true, 00:07:32.093 "nvme_admin": false, 00:07:32.093 "nvme_io": false, 00:07:32.093 "nvme_io_md": false, 00:07:32.093 "write_zeroes": true, 00:07:32.093 "zcopy": true, 00:07:32.093 "get_zone_info": false, 00:07:32.093 "zone_management": false, 00:07:32.093 "zone_append": false, 00:07:32.093 "compare": false, 00:07:32.093 "compare_and_write": false, 00:07:32.093 "abort": true, 00:07:32.093 "seek_hole": false, 00:07:32.093 "seek_data": false, 00:07:32.093 "copy": true, 00:07:32.093 "nvme_iov_md": false 00:07:32.093 }, 00:07:32.093 "memory_domains": [ 00:07:32.093 { 00:07:32.093 "dma_device_id": "system", 00:07:32.093 "dma_device_type": 1 00:07:32.093 }, 00:07:32.093 { 00:07:32.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.093 "dma_device_type": 2 00:07:32.093 } 00:07:32.093 ], 00:07:32.093 "driver_specific": {} 00:07:32.093 } 00:07:32.093 ] 00:07:32.093 06:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:07:32.093 06:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:32.093 06:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:32.093 06:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:32.093 06:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:32.093 06:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:32.093 06:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:32.093 06:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:32.093 06:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:32.093 06:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:32.093 06:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:32.093 06:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:32.093 06:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:32.093 06:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:32.093 06:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.352 06:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:32.352 "name": "Existed_Raid", 00:07:32.352 "uuid": "066af776-b5e8-4efc-a4f4-86e40fa569fd", 00:07:32.352 "strip_size_kb": 64, 00:07:32.352 "state": "online", 00:07:32.352 
"raid_level": "concat", 00:07:32.352 "superblock": false, 00:07:32.352 "num_base_bdevs": 2, 00:07:32.352 "num_base_bdevs_discovered": 2, 00:07:32.352 "num_base_bdevs_operational": 2, 00:07:32.352 "base_bdevs_list": [ 00:07:32.352 { 00:07:32.352 "name": "BaseBdev1", 00:07:32.353 "uuid": "ea6709ea-1e20-451d-ac89-4f7c933e5755", 00:07:32.353 "is_configured": true, 00:07:32.353 "data_offset": 0, 00:07:32.353 "data_size": 65536 00:07:32.353 }, 00:07:32.353 { 00:07:32.353 "name": "BaseBdev2", 00:07:32.353 "uuid": "847bb069-48f8-491d-80dc-395c4d4bbc8a", 00:07:32.353 "is_configured": true, 00:07:32.353 "data_offset": 0, 00:07:32.353 "data_size": 65536 00:07:32.353 } 00:07:32.353 ] 00:07:32.353 }' 00:07:32.353 06:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:32.353 06:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.921 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:32.921 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:32.921 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:32.921 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:32.921 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:32.921 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:32.921 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:32.921 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:33.180 [2024-08-14 06:39:00.223053] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.180 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:33.180 "name": "Existed_Raid", 00:07:33.180 "aliases": [ 00:07:33.180 "066af776-b5e8-4efc-a4f4-86e40fa569fd" 00:07:33.180 ], 00:07:33.180 "product_name": "Raid Volume", 00:07:33.180 "block_size": 512, 00:07:33.180 "num_blocks": 131072, 00:07:33.180 "uuid": "066af776-b5e8-4efc-a4f4-86e40fa569fd", 00:07:33.180 "assigned_rate_limits": { 00:07:33.180 "rw_ios_per_sec": 0, 00:07:33.180 "rw_mbytes_per_sec": 0, 00:07:33.180 "r_mbytes_per_sec": 0, 00:07:33.180 "w_mbytes_per_sec": 0 00:07:33.180 }, 00:07:33.180 "claimed": false, 00:07:33.180 "zoned": false, 00:07:33.180 "supported_io_types": { 00:07:33.180 "read": true, 00:07:33.180 "write": true, 00:07:33.180 "unmap": true, 00:07:33.180 "flush": true, 00:07:33.180 "reset": true, 00:07:33.180 "nvme_admin": false, 00:07:33.180 "nvme_io": false, 00:07:33.180 "nvme_io_md": false, 00:07:33.180 "write_zeroes": true, 00:07:33.180 "zcopy": false, 00:07:33.180 "get_zone_info": false, 00:07:33.180 "zone_management": false, 00:07:33.180 "zone_append": false, 00:07:33.180 "compare": false, 00:07:33.180 "compare_and_write": false, 00:07:33.180 "abort": false, 00:07:33.180 "seek_hole": false, 00:07:33.180 "seek_data": false, 00:07:33.180 "copy": false, 00:07:33.180 "nvme_iov_md": false 00:07:33.180 }, 00:07:33.180 "memory_domains": [ 00:07:33.180 { 00:07:33.180 "dma_device_id": "system", 00:07:33.180 "dma_device_type": 1 00:07:33.180 }, 00:07:33.180 { 00:07:33.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:07:33.180 "dma_device_type": 2 00:07:33.180 }, 00:07:33.180 { 00:07:33.180 "dma_device_id": "system", 00:07:33.180 "dma_device_type": 1 00:07:33.180 }, 00:07:33.180 { 00:07:33.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.180 "dma_device_type": 2 00:07:33.180 } 00:07:33.180 ], 00:07:33.180 "driver_specific": { 00:07:33.180 "raid": { 00:07:33.180 "uuid": "066af776-b5e8-4efc-a4f4-86e40fa569fd", 00:07:33.180 "strip_size_kb": 64, 00:07:33.180 "state": "online", 00:07:33.180 "raid_level": "concat", 00:07:33.180 "superblock": false, 00:07:33.180 "num_base_bdevs": 2, 00:07:33.180 "num_base_bdevs_discovered": 2, 00:07:33.180 "num_base_bdevs_operational": 2, 00:07:33.180 "base_bdevs_list": [ 00:07:33.180 { 00:07:33.180 "name": "BaseBdev1", 00:07:33.180 "uuid": "ea6709ea-1e20-451d-ac89-4f7c933e5755", 00:07:33.180 "is_configured": true, 00:07:33.180 "data_offset": 0, 00:07:33.180 "data_size": 65536 00:07:33.180 }, 00:07:33.180 { 00:07:33.180 "name": "BaseBdev2", 00:07:33.180 "uuid": "847bb069-48f8-491d-80dc-395c4d4bbc8a", 00:07:33.180 "is_configured": true, 00:07:33.180 "data_offset": 0, 00:07:33.180 "data_size": 65536 00:07:33.180 } 00:07:33.180 ] 00:07:33.180 } 00:07:33.180 } 00:07:33.180 }' 00:07:33.180 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:33.180 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:33.180 BaseBdev2' 00:07:33.180 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:33.180 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:33.180 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:33.440 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:33.440 "name": "BaseBdev1", 00:07:33.440 "aliases": [ 00:07:33.440 "ea6709ea-1e20-451d-ac89-4f7c933e5755" 00:07:33.440 ], 00:07:33.440 "product_name": "Malloc disk", 00:07:33.440 "block_size": 512, 00:07:33.440 "num_blocks": 65536, 00:07:33.440 "uuid": "ea6709ea-1e20-451d-ac89-4f7c933e5755", 00:07:33.440 "assigned_rate_limits": { 00:07:33.440 "rw_ios_per_sec": 0, 00:07:33.440 "rw_mbytes_per_sec": 0, 00:07:33.440 "r_mbytes_per_sec": 0, 00:07:33.440 "w_mbytes_per_sec": 0 00:07:33.440 }, 00:07:33.440 "claimed": true, 00:07:33.440 "claim_type": "exclusive_write", 00:07:33.440 "zoned": false, 00:07:33.440 "supported_io_types": { 00:07:33.440 "read": true, 00:07:33.440 "write": true, 00:07:33.440 "unmap": true, 00:07:33.440 "flush": true, 00:07:33.440 "reset": true, 00:07:33.440 "nvme_admin": false, 00:07:33.440 "nvme_io": false, 00:07:33.440 "nvme_io_md": false, 00:07:33.440 "write_zeroes": true, 00:07:33.440 "zcopy": true, 00:07:33.440 "get_zone_info": false, 00:07:33.440 "zone_management": false, 00:07:33.440 "zone_append": false, 00:07:33.440 "compare": false, 00:07:33.440 "compare_and_write": false, 00:07:33.440 "abort": true, 00:07:33.440 "seek_hole": false, 00:07:33.440 "seek_data": false, 00:07:33.440 "copy": true, 00:07:33.440 "nvme_iov_md": false 00:07:33.440 }, 00:07:33.440 "memory_domains": [ 00:07:33.440 { 00:07:33.440 "dma_device_id": "system", 00:07:33.440 "dma_device_type": 1 00:07:33.440 }, 00:07:33.440 { 00:07:33.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.440 
"dma_device_type": 2 00:07:33.440 } 00:07:33.440 ], 00:07:33.440 "driver_specific": {} 00:07:33.440 }' 00:07:33.440 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:33.440 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:33.440 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:33.440 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:33.440 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:33.440 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:33.440 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:33.699 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:33.699 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:33.699 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:33.699 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:33.699 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:33.699 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:33.699 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:33.699 06:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:33.966 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:33.966 "name": "BaseBdev2", 00:07:33.966 "aliases": [ 00:07:33.966 "847bb069-48f8-491d-80dc-395c4d4bbc8a" 00:07:33.966 ], 00:07:33.966 "product_name": "Malloc disk", 00:07:33.966 "block_size": 512, 00:07:33.966 "num_blocks": 65536, 00:07:33.966 "uuid": "847bb069-48f8-491d-80dc-395c4d4bbc8a", 00:07:33.966 "assigned_rate_limits": { 00:07:33.966 "rw_ios_per_sec": 0, 00:07:33.966 "rw_mbytes_per_sec": 0, 00:07:33.966 "r_mbytes_per_sec": 0, 00:07:33.966 "w_mbytes_per_sec": 0 00:07:33.966 }, 00:07:33.966 "claimed": true, 00:07:33.966 "claim_type": "exclusive_write", 00:07:33.966 "zoned": false, 00:07:33.966 "supported_io_types": { 00:07:33.966 "read": true, 00:07:33.966 "write": true, 00:07:33.966 "unmap": true, 00:07:33.966 "flush": true, 00:07:33.966 "reset": true, 00:07:33.966 "nvme_admin": false, 00:07:33.966 "nvme_io": false, 00:07:33.966 "nvme_io_md": false, 00:07:33.966 "write_zeroes": true, 00:07:33.966 "zcopy": true, 00:07:33.966 "get_zone_info": false, 00:07:33.966 "zone_management": false, 00:07:33.966 "zone_append": false, 00:07:33.966 "compare": false, 00:07:33.966 "compare_and_write": false, 00:07:33.966 "abort": true, 00:07:33.966 "seek_hole": false, 00:07:33.966 "seek_data": false, 00:07:33.966 "copy": true, 00:07:33.966 "nvme_iov_md": false 00:07:33.966 }, 00:07:33.966 "memory_domains": [ 00:07:33.966 { 00:07:33.966 "dma_device_id": "system", 00:07:33.966 "dma_device_type": 1 00:07:33.966 }, 00:07:33.966 { 00:07:33.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.966 "dma_device_type": 2 00:07:33.966 } 00:07:33.966 ], 00:07:33.966 "driver_specific": {} 00:07:33.966 }' 00:07:33.966 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:07:33.966 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:33.966 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:33.966 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:33.966 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:33.966 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:33.966 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:34.226 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:34.226 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:34.226 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:34.226 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:34.226 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:34.226 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:34.486 [2024-08-14 06:39:01.556634] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:34.486 [2024-08-14 06:39:01.556764] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.486 [2024-08-14 06:39:01.556843] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.486 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:34.486 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:07:34.486 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:34.486 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:34.486 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:34.486 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:34.486 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:34.486 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:34.486 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:34.486 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:34.486 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:34.486 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:34.486 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:34.486 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:34.486 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:34.486 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:34.486 
06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.745 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:34.745 "name": "Existed_Raid", 00:07:34.745 "uuid": "066af776-b5e8-4efc-a4f4-86e40fa569fd", 00:07:34.745 "strip_size_kb": 64, 00:07:34.745 "state": "offline", 00:07:34.745 "raid_level": "concat", 00:07:34.745 "superblock": false, 00:07:34.745 "num_base_bdevs": 2, 00:07:34.745 "num_base_bdevs_discovered": 1, 00:07:34.745 "num_base_bdevs_operational": 1, 00:07:34.745 "base_bdevs_list": [ 00:07:34.745 { 00:07:34.745 "name": null, 00:07:34.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.745 "is_configured": false, 00:07:34.745 "data_offset": 0, 00:07:34.745 "data_size": 65536 00:07:34.745 }, 00:07:34.745 { 00:07:34.745 "name": "BaseBdev2", 00:07:34.745 "uuid": "847bb069-48f8-491d-80dc-395c4d4bbc8a", 00:07:34.745 "is_configured": true, 00:07:34.745 "data_offset": 0, 00:07:34.745 "data_size": 65536 00:07:34.746 } 00:07:34.746 ] 00:07:34.746 }' 00:07:34.746 06:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:34.746 06:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.315 06:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:35.315 06:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:35.315 06:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:35.315 06:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:35.315 06:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:35.315 06:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:35.315 06:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:35.574 [2024-08-14 06:39:02.726391] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:35.574 [2024-08-14 06:39:02.726567] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:35.574 06:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:35.574 06:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:35.575 06:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:35.575 06:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:35.834 06:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:35.834 06:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:35.834 06:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:35.834 06:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 72415 00:07:35.834 06:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 72415 ']' 00:07:35.834 06:39:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 72415 00:07:35.834 06:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:07:35.834 06:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:35.834 06:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72415 00:07:35.834 killing process with pid 72415 00:07:35.834 06:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:35.834 06:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:35.834 06:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72415' 00:07:35.834 06:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 72415 00:07:35.834 [2024-08-14 06:39:03.026631] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.834 06:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 72415 00:07:35.834 [2024-08-14 06:39:03.027712] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.094 06:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:07:36.094 00:07:36.094 real 0m9.304s 00:07:36.094 user 0m16.637s 00:07:36.094 sys 0m1.429s 00:07:36.094 06:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.094 06:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.094 ************************************ 00:07:36.094 END TEST raid_state_function_test 00:07:36.094 ************************************ 00:07:36.094 06:39:03 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:36.094 06:39:03 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:36.094 06:39:03 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.094 06:39:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.094 ************************************ 00:07:36.094 START TEST raid_state_function_test_sb 00:07:36.094 ************************************ 00:07:36.094 06:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 true 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=72755 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 72755' 00:07:36.354 Process raid pid: 72755 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 72755 /var/tmp/spdk-raid.sock 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 72755 ']' 00:07:36.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:36.354 06:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.354 [2024-08-14 06:39:03.436332] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:07:36.354 [2024-08-14 06:39:03.436489] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.354 [2024-08-14 06:39:03.586896] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.614 [2024-08-14 06:39:03.639110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.614 [2024-08-14 06:39:03.684195] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.614 [2024-08-14 06:39:03.684244] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.182 06:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:37.182 06:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:07:37.182 06:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:37.442 [2024-08-14 06:39:04.477291] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:37.442 [2024-08-14 06:39:04.477423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:37.442 [2024-08-14 06:39:04.477485] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.442 [2024-08-14 06:39:04.477516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.442 06:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:37.442 06:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:37.442 06:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:37.442 06:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:37.442 06:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:37.442 06:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:37.442 06:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:37.442 06:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:37.442 06:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:37.442 06:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:37.442 06:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:37.442 06:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.701 06:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:37.701 "name": "Existed_Raid", 00:07:37.701 "uuid": "d3cca850-696e-400f-8a58-fc2de7bed66c", 00:07:37.701 "strip_size_kb": 64, 00:07:37.701 "state": "configuring", 00:07:37.701 "raid_level": "concat", 00:07:37.701 
"superblock": true, 00:07:37.701 "num_base_bdevs": 2, 00:07:37.701 "num_base_bdevs_discovered": 0, 00:07:37.701 "num_base_bdevs_operational": 2, 00:07:37.701 "base_bdevs_list": [ 00:07:37.701 { 00:07:37.701 "name": "BaseBdev1", 00:07:37.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.701 "is_configured": false, 00:07:37.701 "data_offset": 0, 00:07:37.701 "data_size": 0 00:07:37.701 }, 00:07:37.701 { 00:07:37.701 "name": "BaseBdev2", 00:07:37.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.701 "is_configured": false, 00:07:37.701 "data_offset": 0, 00:07:37.701 "data_size": 0 00:07:37.701 } 00:07:37.701 ] 00:07:37.701 }' 00:07:37.701 06:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:37.701 06:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.269 06:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:38.269 [2024-08-14 06:39:05.499591] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:38.269 [2024-08-14 06:39:05.499709] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:38.269 06:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:38.529 [2024-08-14 06:39:05.707329] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:38.529 [2024-08-14 06:39:05.707479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:38.529 [2024-08-14 06:39:05.707538] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.529 [2024-08-14 06:39:05.707573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.529 06:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:38.789 [2024-08-14 06:39:05.896309] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:38.789 BaseBdev1 00:07:38.789 06:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:38.789 06:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:07:38.789 06:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:07:38.789 06:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:07:38.789 06:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:07:38.789 06:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:07:38.789 06:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:39.047 06:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:39.307 [ 00:07:39.307 { 
00:07:39.307 "name": "BaseBdev1", 00:07:39.307 "aliases": [ 00:07:39.307 "431526a5-02a7-45b0-b6db-7b18126cdd73" 00:07:39.307 ], 00:07:39.307 "product_name": "Malloc disk", 00:07:39.307 "block_size": 512, 00:07:39.307 "num_blocks": 65536, 00:07:39.307 "uuid": "431526a5-02a7-45b0-b6db-7b18126cdd73", 00:07:39.307 "assigned_rate_limits": { 00:07:39.307 "rw_ios_per_sec": 0, 00:07:39.307 "rw_mbytes_per_sec": 0, 00:07:39.307 "r_mbytes_per_sec": 0, 00:07:39.307 "w_mbytes_per_sec": 0 00:07:39.307 }, 00:07:39.307 "claimed": true, 00:07:39.307 "claim_type": "exclusive_write", 00:07:39.307 "zoned": false, 00:07:39.307 "supported_io_types": { 00:07:39.307 "read": true, 00:07:39.307 "write": true, 00:07:39.307 "unmap": true, 00:07:39.307 "flush": true, 00:07:39.307 "reset": true, 00:07:39.307 "nvme_admin": false, 00:07:39.307 "nvme_io": false, 00:07:39.307 "nvme_io_md": false, 00:07:39.307 "write_zeroes": true, 00:07:39.307 "zcopy": true, 00:07:39.307 "get_zone_info": false, 00:07:39.307 "zone_management": false, 00:07:39.307 "zone_append": false, 00:07:39.307 "compare": false, 00:07:39.307 "compare_and_write": false, 00:07:39.307 "abort": true, 00:07:39.307 "seek_hole": false, 00:07:39.307 "seek_data": false, 00:07:39.307 "copy": true, 00:07:39.307 "nvme_iov_md": false 00:07:39.307 }, 00:07:39.307 "memory_domains": [ 00:07:39.307 { 00:07:39.307 "dma_device_id": "system", 00:07:39.307 "dma_device_type": 1 00:07:39.307 }, 00:07:39.307 { 00:07:39.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.307 "dma_device_type": 2 00:07:39.307 } 00:07:39.307 ], 00:07:39.307 "driver_specific": {} 00:07:39.307 } 00:07:39.307 ] 00:07:39.307 06:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:07:39.307 06:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:39.307 06:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:39.307 06:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:39.307 06:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:39.307 06:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:39.307 06:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:39.307 06:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:39.308 06:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:39.308 06:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:39.308 06:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:39.308 06:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:39.308 06:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.308 06:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:39.308 "name": "Existed_Raid", 00:07:39.308 "uuid": "f1495c0b-eb84-4099-8623-0fbe16ae40a7", 00:07:39.308 "strip_size_kb": 64, 00:07:39.308 "state": "configuring", 00:07:39.308 "raid_level": 
"concat", 00:07:39.308 "superblock": true, 00:07:39.308 "num_base_bdevs": 2, 00:07:39.308 "num_base_bdevs_discovered": 1, 00:07:39.308 "num_base_bdevs_operational": 2, 00:07:39.308 "base_bdevs_list": [ 00:07:39.308 { 00:07:39.308 "name": "BaseBdev1", 00:07:39.308 "uuid": "431526a5-02a7-45b0-b6db-7b18126cdd73", 00:07:39.308 "is_configured": true, 00:07:39.308 "data_offset": 2048, 00:07:39.308 "data_size": 63488 00:07:39.308 }, 00:07:39.308 { 00:07:39.308 "name": "BaseBdev2", 00:07:39.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.308 "is_configured": false, 00:07:39.308 "data_offset": 0, 00:07:39.308 "data_size": 0 00:07:39.308 } 00:07:39.308 ] 00:07:39.308 }' 00:07:39.308 06:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:39.308 06:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.876 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:40.135 [2024-08-14 06:39:07.315471] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:40.135 [2024-08-14 06:39:07.315652] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:40.135 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:40.393 [2024-08-14 06:39:07.535197] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:40.393 [2024-08-14 06:39:07.537339] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:40.393 [2024-08-14 06:39:07.537439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:40.393 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:40.393 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:40.393 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:40.393 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:40.393 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:40.393 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:40.393 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:40.393 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:40.393 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:40.393 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:40.393 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:40.393 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:40.393 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:07:40.393 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.651 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:40.651 "name": "Existed_Raid", 00:07:40.651 "uuid": "519ee1d8-d626-4fb6-a9a8-cb7bbdf40153", 00:07:40.651 "strip_size_kb": 64, 00:07:40.651 "state": "configuring", 00:07:40.651 "raid_level": "concat", 00:07:40.651 "superblock": true, 00:07:40.651 "num_base_bdevs": 2, 00:07:40.651 "num_base_bdevs_discovered": 1, 00:07:40.651 "num_base_bdevs_operational": 2, 00:07:40.651 "base_bdevs_list": [ 00:07:40.651 { 00:07:40.651 "name": "BaseBdev1", 00:07:40.651 "uuid": "431526a5-02a7-45b0-b6db-7b18126cdd73", 00:07:40.651 "is_configured": true, 00:07:40.651 "data_offset": 2048, 00:07:40.651 "data_size": 63488 00:07:40.651 }, 00:07:40.651 { 00:07:40.651 "name": "BaseBdev2", 00:07:40.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.651 "is_configured": false, 00:07:40.651 "data_offset": 0, 00:07:40.651 "data_size": 0 00:07:40.651 } 00:07:40.651 ] 00:07:40.651 }' 00:07:40.651 06:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:40.651 06:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.218 06:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:41.487 [2024-08-14 06:39:08.629132] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:41.487 [2024-08-14 06:39:08.629509] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:41.487 [2024-08-14 06:39:08.629540] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:41.487 [2024-08-14 06:39:08.629965] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:41.487 [2024-08-14 06:39:08.630322] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:41.487 [2024-08-14 06:39:08.630403] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:41.487 [2024-08-14 06:39:08.630683] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.487 BaseBdev2 00:07:41.487 06:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:41.487 06:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:07:41.487 06:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:07:41.487 06:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:07:41.487 06:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:07:41.487 06:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:07:41.487 06:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:41.762 06:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev2 -t 2000 00:07:42.026 [ 00:07:42.026 { 00:07:42.026 "name": "BaseBdev2", 00:07:42.026 "aliases": [ 00:07:42.026 "4841de91-11a2-4224-8ead-192402a3c949" 00:07:42.026 ], 00:07:42.026 "product_name": "Malloc disk", 00:07:42.026 "block_size": 512, 00:07:42.026 "num_blocks": 65536, 00:07:42.026 "uuid": "4841de91-11a2-4224-8ead-192402a3c949", 00:07:42.026 "assigned_rate_limits": { 00:07:42.026 "rw_ios_per_sec": 0, 00:07:42.026 "rw_mbytes_per_sec": 0, 00:07:42.026 "r_mbytes_per_sec": 0, 00:07:42.026 "w_mbytes_per_sec": 0 00:07:42.026 }, 00:07:42.026 "claimed": true, 00:07:42.026 "claim_type": "exclusive_write", 00:07:42.026 "zoned": false, 00:07:42.026 "supported_io_types": { 00:07:42.026 "read": true, 00:07:42.026 "write": true, 00:07:42.026 "unmap": true, 00:07:42.026 "flush": true, 00:07:42.026 "reset": true, 00:07:42.026 "nvme_admin": false, 00:07:42.026 "nvme_io": false, 00:07:42.026 "nvme_io_md": false, 00:07:42.026 "write_zeroes": true, 00:07:42.026 "zcopy": true, 00:07:42.026 "get_zone_info": false, 00:07:42.026 "zone_management": false, 00:07:42.026 "zone_append": false, 00:07:42.026 "compare": false, 00:07:42.026 "compare_and_write": false, 00:07:42.026 "abort": true, 00:07:42.026 "seek_hole": false, 00:07:42.026 "seek_data": false, 00:07:42.026 "copy": true, 00:07:42.026 "nvme_iov_md": false 00:07:42.026 }, 00:07:42.026 "memory_domains": [ 00:07:42.026 { 00:07:42.026 "dma_device_id": "system", 00:07:42.026 "dma_device_type": 1 00:07:42.026 }, 00:07:42.026 { 00:07:42.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.026 "dma_device_type": 2 00:07:42.026 } 00:07:42.026 ], 00:07:42.026 "driver_specific": {} 00:07:42.026 } 00:07:42.026 ] 00:07:42.026 06:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:07:42.026 06:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:42.026 06:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:42.026 06:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:42.026 06:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:42.026 06:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:42.026 06:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:42.026 06:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:42.026 06:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:42.026 06:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:42.026 06:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:42.026 06:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:42.026 06:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:42.026 06:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.026 06:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:42.283 06:39:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:42.283 "name": "Existed_Raid", 00:07:42.283 "uuid": "519ee1d8-d626-4fb6-a9a8-cb7bbdf40153", 00:07:42.283 "strip_size_kb": 64, 00:07:42.283 "state": "online", 00:07:42.283 "raid_level": "concat", 00:07:42.283 "superblock": true, 00:07:42.283 "num_base_bdevs": 2, 00:07:42.283 "num_base_bdevs_discovered": 2, 00:07:42.283 "num_base_bdevs_operational": 2, 00:07:42.283 "base_bdevs_list": [ 00:07:42.283 { 00:07:42.283 "name": "BaseBdev1", 00:07:42.283 "uuid": "431526a5-02a7-45b0-b6db-7b18126cdd73", 00:07:42.283 "is_configured": true, 00:07:42.283 "data_offset": 2048, 00:07:42.283 "data_size": 63488 00:07:42.283 }, 00:07:42.283 { 00:07:42.283 "name": "BaseBdev2", 00:07:42.283 "uuid": "4841de91-11a2-4224-8ead-192402a3c949", 00:07:42.283 "is_configured": true, 00:07:42.283 "data_offset": 2048, 00:07:42.283 "data_size": 63488 00:07:42.283 } 00:07:42.283 ] 00:07:42.283 }' 00:07:42.283 06:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:42.283 06:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.848 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:42.848 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:42.848 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:42.848 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:42.848 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:42.848 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:07:42.848 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:42.848 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:43.106 [2024-08-14 06:39:10.263344] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.106 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:43.106 "name": "Existed_Raid", 00:07:43.106 "aliases": [ 00:07:43.106 "519ee1d8-d626-4fb6-a9a8-cb7bbdf40153" 00:07:43.106 ], 00:07:43.106 "product_name": "Raid Volume", 00:07:43.106 "block_size": 512, 00:07:43.106 "num_blocks": 126976, 00:07:43.106 "uuid": "519ee1d8-d626-4fb6-a9a8-cb7bbdf40153", 00:07:43.106 "assigned_rate_limits": { 00:07:43.106 "rw_ios_per_sec": 0, 00:07:43.106 "rw_mbytes_per_sec": 0, 00:07:43.106 "r_mbytes_per_sec": 0, 00:07:43.106 "w_mbytes_per_sec": 0 00:07:43.106 }, 00:07:43.106 "claimed": false, 00:07:43.106 "zoned": false, 00:07:43.106 "supported_io_types": { 00:07:43.106 "read": true, 00:07:43.107 "write": true, 00:07:43.107 "unmap": true, 00:07:43.107 "flush": true, 00:07:43.107 "reset": true, 00:07:43.107 "nvme_admin": false, 00:07:43.107 "nvme_io": false, 00:07:43.107 "nvme_io_md": false, 00:07:43.107 "write_zeroes": true, 00:07:43.107 "zcopy": false, 00:07:43.107 "get_zone_info": false, 00:07:43.107 "zone_management": false, 00:07:43.107 "zone_append": false, 00:07:43.107 "compare": false, 00:07:43.107 "compare_and_write": false, 00:07:43.107 "abort": false, 00:07:43.107 "seek_hole": false, 00:07:43.107 "seek_data": 
false, 00:07:43.107 "copy": false, 00:07:43.107 "nvme_iov_md": false 00:07:43.107 }, 00:07:43.107 "memory_domains": [ 00:07:43.107 { 00:07:43.107 "dma_device_id": "system", 00:07:43.107 "dma_device_type": 1 00:07:43.107 }, 00:07:43.107 { 00:07:43.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.107 "dma_device_type": 2 00:07:43.107 }, 00:07:43.107 { 00:07:43.107 "dma_device_id": "system", 00:07:43.107 "dma_device_type": 1 00:07:43.107 }, 00:07:43.107 { 00:07:43.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.107 "dma_device_type": 2 00:07:43.107 } 00:07:43.107 ], 00:07:43.107 "driver_specific": { 00:07:43.107 "raid": { 00:07:43.107 "uuid": "519ee1d8-d626-4fb6-a9a8-cb7bbdf40153", 00:07:43.107 "strip_size_kb": 64, 00:07:43.107 "state": "online", 00:07:43.107 "raid_level": "concat", 00:07:43.107 "superblock": true, 00:07:43.107 "num_base_bdevs": 2, 00:07:43.107 "num_base_bdevs_discovered": 2, 00:07:43.107 "num_base_bdevs_operational": 2, 00:07:43.107 "base_bdevs_list": [ 00:07:43.107 { 00:07:43.107 "name": "BaseBdev1", 00:07:43.107 "uuid": "431526a5-02a7-45b0-b6db-7b18126cdd73", 00:07:43.107 "is_configured": true, 00:07:43.107 "data_offset": 2048, 00:07:43.107 "data_size": 63488 00:07:43.107 }, 00:07:43.107 { 00:07:43.107 "name": "BaseBdev2", 00:07:43.107 "uuid": "4841de91-11a2-4224-8ead-192402a3c949", 00:07:43.107 "is_configured": true, 00:07:43.107 "data_offset": 2048, 00:07:43.107 "data_size": 63488 00:07:43.107 } 00:07:43.107 ] 00:07:43.107 } 00:07:43.107 } 00:07:43.107 }' 00:07:43.107 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.107 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:43.107 BaseBdev2' 00:07:43.107 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:43.107 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:43.107 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:43.366 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:43.366 "name": "BaseBdev1", 00:07:43.366 "aliases": [ 00:07:43.366 "431526a5-02a7-45b0-b6db-7b18126cdd73" 00:07:43.366 ], 00:07:43.366 "product_name": "Malloc disk", 00:07:43.366 "block_size": 512, 00:07:43.366 "num_blocks": 65536, 00:07:43.366 "uuid": "431526a5-02a7-45b0-b6db-7b18126cdd73", 00:07:43.366 "assigned_rate_limits": { 00:07:43.366 "rw_ios_per_sec": 0, 00:07:43.366 "rw_mbytes_per_sec": 0, 00:07:43.366 "r_mbytes_per_sec": 0, 00:07:43.366 "w_mbytes_per_sec": 0 00:07:43.366 }, 00:07:43.366 "claimed": true, 00:07:43.366 "claim_type": "exclusive_write", 00:07:43.366 "zoned": false, 00:07:43.366 "supported_io_types": { 00:07:43.366 "read": true, 00:07:43.366 "write": true, 00:07:43.366 "unmap": true, 00:07:43.367 "flush": true, 00:07:43.367 "reset": true, 00:07:43.367 "nvme_admin": false, 00:07:43.367 "nvme_io": false, 00:07:43.367 "nvme_io_md": false, 00:07:43.367 "write_zeroes": true, 00:07:43.367 "zcopy": true, 00:07:43.367 "get_zone_info": false, 00:07:43.367 "zone_management": false, 00:07:43.367 "zone_append": false, 00:07:43.367 "compare": false, 00:07:43.367 "compare_and_write": false, 00:07:43.367 "abort": true, 00:07:43.367 "seek_hole": false, 00:07:43.367 "seek_data": 
false, 00:07:43.367 "copy": true, 00:07:43.367 "nvme_iov_md": false 00:07:43.367 }, 00:07:43.367 "memory_domains": [ 00:07:43.367 { 00:07:43.367 "dma_device_id": "system", 00:07:43.367 "dma_device_type": 1 00:07:43.367 }, 00:07:43.367 { 00:07:43.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.367 "dma_device_type": 2 00:07:43.367 } 00:07:43.367 ], 00:07:43.367 "driver_specific": {} 00:07:43.367 }' 00:07:43.367 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:43.626 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:43.626 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:43.626 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:43.626 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:43.626 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:43.626 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:43.626 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:43.626 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:43.626 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:43.884 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:43.884 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:43.884 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:43.884 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:43.884 06:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:44.143 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:44.143 "name": "BaseBdev2", 00:07:44.143 "aliases": [ 00:07:44.143 "4841de91-11a2-4224-8ead-192402a3c949" 00:07:44.143 ], 00:07:44.143 "product_name": "Malloc disk", 00:07:44.143 "block_size": 512, 00:07:44.143 "num_blocks": 65536, 00:07:44.143 "uuid": "4841de91-11a2-4224-8ead-192402a3c949", 00:07:44.143 "assigned_rate_limits": { 00:07:44.143 "rw_ios_per_sec": 0, 00:07:44.143 "rw_mbytes_per_sec": 0, 00:07:44.143 "r_mbytes_per_sec": 0, 00:07:44.143 "w_mbytes_per_sec": 0 00:07:44.143 }, 00:07:44.143 "claimed": true, 00:07:44.143 "claim_type": "exclusive_write", 00:07:44.143 "zoned": false, 00:07:44.143 "supported_io_types": { 00:07:44.143 "read": true, 00:07:44.143 "write": true, 00:07:44.143 "unmap": true, 00:07:44.143 "flush": true, 00:07:44.143 "reset": true, 00:07:44.143 "nvme_admin": false, 00:07:44.143 "nvme_io": false, 00:07:44.143 "nvme_io_md": false, 00:07:44.143 "write_zeroes": true, 00:07:44.143 "zcopy": true, 00:07:44.143 "get_zone_info": false, 00:07:44.143 "zone_management": false, 00:07:44.143 "zone_append": false, 00:07:44.143 "compare": false, 00:07:44.143 "compare_and_write": false, 00:07:44.143 "abort": true, 00:07:44.143 "seek_hole": false, 00:07:44.143 "seek_data": false, 00:07:44.143 "copy": true, 00:07:44.143 "nvme_iov_md": false 00:07:44.143 }, 00:07:44.143 "memory_domains": [ 00:07:44.143 { 00:07:44.143 
"dma_device_id": "system", 00:07:44.143 "dma_device_type": 1 00:07:44.143 }, 00:07:44.143 { 00:07:44.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.143 "dma_device_type": 2 00:07:44.143 } 00:07:44.143 ], 00:07:44.143 "driver_specific": {} 00:07:44.143 }' 00:07:44.143 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:44.143 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:44.143 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:44.143 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:44.143 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:44.401 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:44.401 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:44.401 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:44.401 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:44.401 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:44.401 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:44.401 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:44.401 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:44.659 [2024-08-14 06:39:11.844533] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:44.659 [2024-08-14 06:39:11.844584] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.659 [2024-08-14 06:39:11.844664] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:44.659 06:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.918 06:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:44.918 "name": "Existed_Raid", 00:07:44.918 "uuid": "519ee1d8-d626-4fb6-a9a8-cb7bbdf40153", 00:07:44.918 "strip_size_kb": 64, 00:07:44.918 "state": "offline", 00:07:44.918 "raid_level": "concat", 00:07:44.918 "superblock": true, 00:07:44.918 "num_base_bdevs": 2, 00:07:44.918 "num_base_bdevs_discovered": 1, 00:07:44.918 "num_base_bdevs_operational": 1, 00:07:44.918 "base_bdevs_list": [ 00:07:44.918 { 00:07:44.918 "name": null, 00:07:44.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.918 "is_configured": false, 00:07:44.918 "data_offset": 2048, 00:07:44.918 "data_size": 63488 00:07:44.918 }, 00:07:44.918 { 00:07:44.918 "name": "BaseBdev2", 00:07:44.918 "uuid": "4841de91-11a2-4224-8ead-192402a3c949", 00:07:44.918 "is_configured": true, 00:07:44.918 "data_offset": 2048, 00:07:44.918 "data_size": 63488 00:07:44.918 } 00:07:44.918 ] 00:07:44.918 }' 00:07:44.918 06:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:44.918 06:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.487 06:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:45.487 06:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:45.487 06:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:45.488 06:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:45.748 06:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:45.748 06:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:45.748 06:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:46.007 [2024-08-14 06:39:13.162724] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:46.007 [2024-08-14 06:39:13.162892] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:46.007 06:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:46.007 06:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:46.007 06:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:46.007 06:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:46.267 06:39:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:46.267 06:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:46.267 06:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:46.267 06:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 72755 00:07:46.267 06:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 72755 ']' 00:07:46.267 06:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 72755 00:07:46.267 06:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:07:46.267 06:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:46.267 06:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72755 00:07:46.267 06:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:46.267 killing process with pid 72755 00:07:46.267 06:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:46.267 06:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72755' 00:07:46.267 06:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 72755 00:07:46.267 [2024-08-14 06:39:13.508023] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.267 06:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 72755 00:07:46.267 [2024-08-14 06:39:13.509149] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:46.527 06:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:07:46.527 00:07:46.527 real 0m10.410s 00:07:46.527 user 0m18.762s 00:07:46.527 sys 0m1.556s 00:07:46.527 ************************************ 00:07:46.527 END TEST raid_state_function_test_sb 00:07:46.527 ************************************ 00:07:46.527 06:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.527 06:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.785 06:39:13 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:46.785 06:39:13 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:46.785 06:39:13 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.785 06:39:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:46.785 ************************************ 00:07:46.785 START TEST raid_superblock_test 00:07:46.785 ************************************ 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 2 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:07:46.785 06:39:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=73111 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 73111 /var/tmp/spdk-raid.sock 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 73111 ']' 00:07:46.785 06:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:46.786 06:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:46.786 06:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:46.786 06:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:46.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:46.786 06:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:46.786 06:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.786 [2024-08-14 06:39:13.921315] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:07:46.786 [2024-08-14 06:39:13.921571] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73111 ] 00:07:47.045 [2024-08-14 06:39:14.072737] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.045 [2024-08-14 06:39:14.128482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.045 [2024-08-14 06:39:14.173774] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.045 [2024-08-14 06:39:14.173816] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.614 06:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:47.614 06:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:07:47.614 06:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:07:47.614 06:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:07:47.614 06:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:07:47.614 06:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:07:47.614 06:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:47.614 06:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:47.614 06:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:07:47.614 06:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:47.614 06:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:47.952 malloc1 00:07:47.952 06:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:48.216 [2024-08-14 06:39:15.352379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:48.216 [2024-08-14 06:39:15.352473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.216 [2024-08-14 06:39:15.352510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:48.216 [2024-08-14 06:39:15.352523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.216 [2024-08-14 06:39:15.355025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.216 pt1 00:07:48.216 [2024-08-14 06:39:15.355124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:48.216 06:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:07:48.216 06:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:07:48.216 06:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:07:48.216 06:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:07:48.216 06:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:48.216 06:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:48.216 06:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:07:48.216 06:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:48.216 06:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:48.475 malloc2 00:07:48.475 06:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:48.735 [2024-08-14 06:39:15.885474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:48.735 [2024-08-14 06:39:15.885652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.735 [2024-08-14 06:39:15.885701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:48.735 [2024-08-14 06:39:15.885745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.735 [2024-08-14 06:39:15.888290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.735 [2024-08-14 06:39:15.888385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:48.735 pt2 00:07:48.735 06:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:07:48.735 06:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:07:48.735 06:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:07:48.994 [2024-08-14 06:39:16.129466] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:48.994 [2024-08-14 06:39:16.131722] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:48.994 [2024-08-14 06:39:16.131962] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:48.994 [2024-08-14 06:39:16.132018] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:48.994 [2024-08-14 06:39:16.132405] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:48.994 [2024-08-14 06:39:16.132624] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:48.994 [2024-08-14 06:39:16.132679] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:48.994 [2024-08-14 06:39:16.132900] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.994 06:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:48.994 06:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:48.994 06:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:48.994 06:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:48.994 06:39:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:48.994 06:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:48.994 06:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:48.994 06:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:48.994 06:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:48.994 06:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:48.994 06:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:48.994 06:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.254 06:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:49.254 "name": "raid_bdev1", 00:07:49.254 "uuid": "a23c7590-c1b8-4fda-8f87-31c6fb84cd2a", 00:07:49.254 "strip_size_kb": 64, 00:07:49.254 "state": "online", 00:07:49.254 "raid_level": "concat", 00:07:49.254 "superblock": true, 00:07:49.254 "num_base_bdevs": 2, 00:07:49.254 "num_base_bdevs_discovered": 2, 00:07:49.254 "num_base_bdevs_operational": 2, 00:07:49.254 "base_bdevs_list": [ 00:07:49.254 { 00:07:49.254 "name": "pt1", 00:07:49.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.254 "is_configured": true, 00:07:49.254 "data_offset": 2048, 00:07:49.254 "data_size": 63488 00:07:49.254 }, 00:07:49.254 { 00:07:49.254 "name": "pt2", 00:07:49.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.254 "is_configured": true, 00:07:49.254 "data_offset": 2048, 00:07:49.254 "data_size": 63488 00:07:49.254 } 00:07:49.254 ] 00:07:49.254 }' 00:07:49.254 06:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:49.254 06:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.822 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:07:49.822 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:49.822 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:49.822 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:49.822 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:49.822 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:49.822 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:49.822 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:50.081 [2024-08-14 06:39:17.256760] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.081 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:50.081 "name": "raid_bdev1", 00:07:50.081 "aliases": [ 00:07:50.081 "a23c7590-c1b8-4fda-8f87-31c6fb84cd2a" 00:07:50.081 ], 00:07:50.081 "product_name": "Raid Volume", 00:07:50.081 "block_size": 512, 00:07:50.081 "num_blocks": 126976, 00:07:50.081 "uuid": "a23c7590-c1b8-4fda-8f87-31c6fb84cd2a", 00:07:50.081 "assigned_rate_limits": { 00:07:50.081 
"rw_ios_per_sec": 0, 00:07:50.081 "rw_mbytes_per_sec": 0, 00:07:50.081 "r_mbytes_per_sec": 0, 00:07:50.081 "w_mbytes_per_sec": 0 00:07:50.081 }, 00:07:50.081 "claimed": false, 00:07:50.081 "zoned": false, 00:07:50.081 "supported_io_types": { 00:07:50.081 "read": true, 00:07:50.081 "write": true, 00:07:50.081 "unmap": true, 00:07:50.081 "flush": true, 00:07:50.081 "reset": true, 00:07:50.081 "nvme_admin": false, 00:07:50.081 "nvme_io": false, 00:07:50.081 "nvme_io_md": false, 00:07:50.081 "write_zeroes": true, 00:07:50.081 "zcopy": false, 00:07:50.081 "get_zone_info": false, 00:07:50.081 "zone_management": false, 00:07:50.082 "zone_append": false, 00:07:50.082 "compare": false, 00:07:50.082 "compare_and_write": false, 00:07:50.082 "abort": false, 00:07:50.082 "seek_hole": false, 00:07:50.082 "seek_data": false, 00:07:50.082 "copy": false, 00:07:50.082 "nvme_iov_md": false 00:07:50.082 }, 00:07:50.082 "memory_domains": [ 00:07:50.082 { 00:07:50.082 "dma_device_id": "system", 00:07:50.082 "dma_device_type": 1 00:07:50.082 }, 00:07:50.082 { 00:07:50.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.082 "dma_device_type": 2 00:07:50.082 }, 00:07:50.082 { 00:07:50.082 "dma_device_id": "system", 00:07:50.082 "dma_device_type": 1 00:07:50.082 }, 00:07:50.082 { 00:07:50.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.082 "dma_device_type": 2 00:07:50.082 } 00:07:50.082 ], 00:07:50.082 "driver_specific": { 00:07:50.082 "raid": { 00:07:50.082 "uuid": "a23c7590-c1b8-4fda-8f87-31c6fb84cd2a", 00:07:50.082 "strip_size_kb": 64, 00:07:50.082 "state": "online", 00:07:50.082 "raid_level": "concat", 00:07:50.082 "superblock": true, 00:07:50.082 "num_base_bdevs": 2, 00:07:50.082 "num_base_bdevs_discovered": 2, 00:07:50.082 "num_base_bdevs_operational": 2, 00:07:50.082 "base_bdevs_list": [ 00:07:50.082 { 00:07:50.082 "name": "pt1", 00:07:50.082 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.082 "is_configured": true, 00:07:50.082 "data_offset": 2048, 00:07:50.082 "data_size": 63488 00:07:50.082 }, 00:07:50.082 { 00:07:50.082 "name": "pt2", 00:07:50.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.082 "is_configured": true, 00:07:50.082 "data_offset": 2048, 00:07:50.082 "data_size": 63488 00:07:50.082 } 00:07:50.082 ] 00:07:50.082 } 00:07:50.082 } 00:07:50.082 }' 00:07:50.082 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:50.082 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:50.082 pt2' 00:07:50.082 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:50.082 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:50.082 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:50.340 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:50.340 "name": "pt1", 00:07:50.340 "aliases": [ 00:07:50.340 "00000000-0000-0000-0000-000000000001" 00:07:50.340 ], 00:07:50.340 "product_name": "passthru", 00:07:50.340 "block_size": 512, 00:07:50.340 "num_blocks": 65536, 00:07:50.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.340 "assigned_rate_limits": { 00:07:50.340 "rw_ios_per_sec": 0, 00:07:50.340 "rw_mbytes_per_sec": 0, 00:07:50.340 "r_mbytes_per_sec": 0, 00:07:50.340 
"w_mbytes_per_sec": 0 00:07:50.340 }, 00:07:50.340 "claimed": true, 00:07:50.340 "claim_type": "exclusive_write", 00:07:50.340 "zoned": false, 00:07:50.340 "supported_io_types": { 00:07:50.340 "read": true, 00:07:50.340 "write": true, 00:07:50.340 "unmap": true, 00:07:50.340 "flush": true, 00:07:50.340 "reset": true, 00:07:50.340 "nvme_admin": false, 00:07:50.340 "nvme_io": false, 00:07:50.340 "nvme_io_md": false, 00:07:50.340 "write_zeroes": true, 00:07:50.340 "zcopy": true, 00:07:50.340 "get_zone_info": false, 00:07:50.340 "zone_management": false, 00:07:50.340 "zone_append": false, 00:07:50.341 "compare": false, 00:07:50.341 "compare_and_write": false, 00:07:50.341 "abort": true, 00:07:50.341 "seek_hole": false, 00:07:50.341 "seek_data": false, 00:07:50.341 "copy": true, 00:07:50.341 "nvme_iov_md": false 00:07:50.341 }, 00:07:50.341 "memory_domains": [ 00:07:50.341 { 00:07:50.341 "dma_device_id": "system", 00:07:50.341 "dma_device_type": 1 00:07:50.341 }, 00:07:50.341 { 00:07:50.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.341 "dma_device_type": 2 00:07:50.341 } 00:07:50.341 ], 00:07:50.341 "driver_specific": { 00:07:50.341 "passthru": { 00:07:50.341 "name": "pt1", 00:07:50.341 "base_bdev_name": "malloc1" 00:07:50.341 } 00:07:50.341 } 00:07:50.341 }' 00:07:50.341 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:50.599 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:50.599 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:50.599 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:50.599 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:50.599 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:50.599 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:50.599 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:50.858 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:50.858 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:50.858 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:50.858 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:50.858 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:50.858 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:50.858 06:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:51.116 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:51.116 "name": "pt2", 00:07:51.117 "aliases": [ 00:07:51.117 "00000000-0000-0000-0000-000000000002" 00:07:51.117 ], 00:07:51.117 "product_name": "passthru", 00:07:51.117 "block_size": 512, 00:07:51.117 "num_blocks": 65536, 00:07:51.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.117 "assigned_rate_limits": { 00:07:51.117 "rw_ios_per_sec": 0, 00:07:51.117 "rw_mbytes_per_sec": 0, 00:07:51.117 "r_mbytes_per_sec": 0, 00:07:51.117 "w_mbytes_per_sec": 0 00:07:51.117 }, 00:07:51.117 "claimed": true, 00:07:51.117 "claim_type": "exclusive_write", 00:07:51.117 "zoned": false, 
00:07:51.117 "supported_io_types": { 00:07:51.117 "read": true, 00:07:51.117 "write": true, 00:07:51.117 "unmap": true, 00:07:51.117 "flush": true, 00:07:51.117 "reset": true, 00:07:51.117 "nvme_admin": false, 00:07:51.117 "nvme_io": false, 00:07:51.117 "nvme_io_md": false, 00:07:51.117 "write_zeroes": true, 00:07:51.117 "zcopy": true, 00:07:51.117 "get_zone_info": false, 00:07:51.117 "zone_management": false, 00:07:51.117 "zone_append": false, 00:07:51.117 "compare": false, 00:07:51.117 "compare_and_write": false, 00:07:51.117 "abort": true, 00:07:51.117 "seek_hole": false, 00:07:51.117 "seek_data": false, 00:07:51.117 "copy": true, 00:07:51.117 "nvme_iov_md": false 00:07:51.117 }, 00:07:51.117 "memory_domains": [ 00:07:51.117 { 00:07:51.117 "dma_device_id": "system", 00:07:51.117 "dma_device_type": 1 00:07:51.117 }, 00:07:51.117 { 00:07:51.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.117 "dma_device_type": 2 00:07:51.117 } 00:07:51.117 ], 00:07:51.117 "driver_specific": { 00:07:51.117 "passthru": { 00:07:51.117 "name": "pt2", 00:07:51.117 "base_bdev_name": "malloc2" 00:07:51.117 } 00:07:51.117 } 00:07:51.117 }' 00:07:51.117 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:51.117 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:51.117 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:51.117 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:51.117 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:51.375 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:51.375 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:51.375 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:51.375 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:51.375 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:51.375 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:51.375 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:51.375 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:07:51.375 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:51.634 [2024-08-14 06:39:18.839779] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.634 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=a23c7590-c1b8-4fda-8f87-31c6fb84cd2a 00:07:51.634 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z a23c7590-c1b8-4fda-8f87-31c6fb84cd2a ']' 00:07:51.634 06:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:51.894 [2024-08-14 06:39:19.111059] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:51.894 [2024-08-14 06:39:19.111104] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.894 [2024-08-14 06:39:19.111236] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:07:51.894 [2024-08-14 06:39:19.111296] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.894 [2024-08-14 06:39:19.111315] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:51.894 06:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:51.894 06:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:07:52.154 06:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:07:52.154 06:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:07:52.154 06:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:07:52.154 06:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:52.723 06:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:07:52.723 06:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:52.723 06:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:52.723 06:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:52.983 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:07:52.983 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:52.983 06:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:07:52.983 06:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:52.983 06:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:52.983 06:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:52.983 06:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:52.983 06:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:52.983 06:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:52.983 06:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:52.983 06:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:52.983 06:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:52.983 06:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:53.242 [2024-08-14 06:39:20.417386] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:53.242 [2024-08-14 06:39:20.419681] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:53.242 [2024-08-14 06:39:20.419812] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:53.242 [2024-08-14 06:39:20.419920] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:53.242 [2024-08-14 06:39:20.419984] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:53.242 [2024-08-14 06:39:20.420036] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:53.242 request: 00:07:53.242 { 00:07:53.242 "name": "raid_bdev1", 00:07:53.242 "raid_level": "concat", 00:07:53.242 "base_bdevs": [ 00:07:53.242 "malloc1", 00:07:53.242 "malloc2" 00:07:53.242 ], 00:07:53.242 "strip_size_kb": 64, 00:07:53.242 "superblock": false, 00:07:53.242 "method": "bdev_raid_create", 00:07:53.242 "req_id": 1 00:07:53.242 } 00:07:53.242 Got JSON-RPC error response 00:07:53.242 response: 00:07:53.242 { 00:07:53.242 "code": -17, 00:07:53.242 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:53.242 } 00:07:53.243 06:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:07:53.243 06:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:53.243 06:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:07:53.243 06:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:53.243 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:53.243 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:07:53.502 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:07:53.502 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:07:53.502 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:53.762 [2024-08-14 06:39:20.920454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:53.762 [2024-08-14 06:39:20.920621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.762 [2024-08-14 06:39:20.920679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:53.762 [2024-08-14 06:39:20.920718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.762 [2024-08-14 06:39:20.923278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.762 [2024-08-14 06:39:20.923376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:53.762 [2024-08-14 06:39:20.923502] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:53.762 [2024-08-14 06:39:20.923595] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:53.762 pt1 00:07:53.762 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:53.762 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:53.762 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:53.762 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:53.762 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:53.762 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:53.762 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:53.762 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:53.762 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:53.762 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:53.762 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.762 06:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:54.022 06:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:54.022 "name": "raid_bdev1", 00:07:54.022 "uuid": "a23c7590-c1b8-4fda-8f87-31c6fb84cd2a", 00:07:54.022 "strip_size_kb": 64, 00:07:54.022 "state": "configuring", 00:07:54.022 "raid_level": "concat", 00:07:54.022 "superblock": true, 00:07:54.022 "num_base_bdevs": 2, 00:07:54.022 "num_base_bdevs_discovered": 1, 00:07:54.022 "num_base_bdevs_operational": 2, 00:07:54.022 "base_bdevs_list": [ 00:07:54.022 { 00:07:54.022 "name": "pt1", 00:07:54.022 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.022 "is_configured": true, 00:07:54.022 "data_offset": 2048, 00:07:54.022 "data_size": 63488 00:07:54.022 }, 00:07:54.022 { 00:07:54.022 "name": null, 00:07:54.022 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.022 "is_configured": false, 00:07:54.022 "data_offset": 2048, 00:07:54.022 "data_size": 63488 00:07:54.022 } 00:07:54.022 ] 00:07:54.022 }' 00:07:54.022 06:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:54.022 06:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.592 06:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:07:54.592 06:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:07:54.592 06:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:07:54.592 06:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:54.852 [2024-08-14 06:39:22.026856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:54.852 [2024-08-14 06:39:22.026951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.852 [2024-08-14 06:39:22.026975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008480 00:07:54.852 [2024-08-14 06:39:22.026989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.852 [2024-08-14 06:39:22.027470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.852 [2024-08-14 06:39:22.027501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:54.852 [2024-08-14 06:39:22.027598] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:54.852 [2024-08-14 06:39:22.027627] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:54.852 [2024-08-14 06:39:22.027745] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:54.852 [2024-08-14 06:39:22.027758] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:54.852 [2024-08-14 06:39:22.028039] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:54.852 [2024-08-14 06:39:22.028185] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:54.852 [2024-08-14 06:39:22.028205] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:54.853 [2024-08-14 06:39:22.028324] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.853 pt2 00:07:54.853 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:07:54.853 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:07:54.853 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:54.853 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:54.853 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:54.853 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:54.853 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:54.853 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:54.853 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:54.853 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:54.853 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:54.853 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:54.853 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:54.853 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.113 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:55.113 "name": "raid_bdev1", 00:07:55.113 "uuid": "a23c7590-c1b8-4fda-8f87-31c6fb84cd2a", 00:07:55.113 "strip_size_kb": 64, 00:07:55.113 "state": "online", 00:07:55.113 "raid_level": "concat", 00:07:55.113 "superblock": true, 00:07:55.113 "num_base_bdevs": 2, 00:07:55.113 "num_base_bdevs_discovered": 2, 00:07:55.113 "num_base_bdevs_operational": 2, 00:07:55.113 "base_bdevs_list": [ 00:07:55.113 { 
00:07:55.113 "name": "pt1", 00:07:55.113 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.113 "is_configured": true, 00:07:55.113 "data_offset": 2048, 00:07:55.113 "data_size": 63488 00:07:55.113 }, 00:07:55.113 { 00:07:55.113 "name": "pt2", 00:07:55.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.113 "is_configured": true, 00:07:55.113 "data_offset": 2048, 00:07:55.113 "data_size": 63488 00:07:55.113 } 00:07:55.113 ] 00:07:55.113 }' 00:07:55.113 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:55.113 06:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.682 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:07:55.682 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:55.682 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:55.682 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:55.682 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:55.682 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:55.682 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:55.682 06:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:55.942 [2024-08-14 06:39:23.129679] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.942 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:55.942 "name": "raid_bdev1", 00:07:55.942 "aliases": [ 00:07:55.942 "a23c7590-c1b8-4fda-8f87-31c6fb84cd2a" 00:07:55.942 ], 00:07:55.942 "product_name": "Raid Volume", 00:07:55.942 "block_size": 512, 00:07:55.942 "num_blocks": 126976, 00:07:55.942 "uuid": "a23c7590-c1b8-4fda-8f87-31c6fb84cd2a", 00:07:55.942 "assigned_rate_limits": { 00:07:55.942 "rw_ios_per_sec": 0, 00:07:55.942 "rw_mbytes_per_sec": 0, 00:07:55.942 "r_mbytes_per_sec": 0, 00:07:55.942 "w_mbytes_per_sec": 0 00:07:55.942 }, 00:07:55.942 "claimed": false, 00:07:55.942 "zoned": false, 00:07:55.942 "supported_io_types": { 00:07:55.942 "read": true, 00:07:55.942 "write": true, 00:07:55.942 "unmap": true, 00:07:55.942 "flush": true, 00:07:55.942 "reset": true, 00:07:55.942 "nvme_admin": false, 00:07:55.942 "nvme_io": false, 00:07:55.942 "nvme_io_md": false, 00:07:55.942 "write_zeroes": true, 00:07:55.942 "zcopy": false, 00:07:55.942 "get_zone_info": false, 00:07:55.942 "zone_management": false, 00:07:55.942 "zone_append": false, 00:07:55.942 "compare": false, 00:07:55.942 "compare_and_write": false, 00:07:55.942 "abort": false, 00:07:55.942 "seek_hole": false, 00:07:55.942 "seek_data": false, 00:07:55.942 "copy": false, 00:07:55.942 "nvme_iov_md": false 00:07:55.942 }, 00:07:55.942 "memory_domains": [ 00:07:55.942 { 00:07:55.942 "dma_device_id": "system", 00:07:55.942 "dma_device_type": 1 00:07:55.942 }, 00:07:55.942 { 00:07:55.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.942 "dma_device_type": 2 00:07:55.942 }, 00:07:55.942 { 00:07:55.942 "dma_device_id": "system", 00:07:55.942 "dma_device_type": 1 00:07:55.942 }, 00:07:55.942 { 00:07:55.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.942 "dma_device_type": 2 00:07:55.942 } 00:07:55.942 ], 
00:07:55.942 "driver_specific": { 00:07:55.942 "raid": { 00:07:55.942 "uuid": "a23c7590-c1b8-4fda-8f87-31c6fb84cd2a", 00:07:55.942 "strip_size_kb": 64, 00:07:55.942 "state": "online", 00:07:55.942 "raid_level": "concat", 00:07:55.942 "superblock": true, 00:07:55.942 "num_base_bdevs": 2, 00:07:55.942 "num_base_bdevs_discovered": 2, 00:07:55.942 "num_base_bdevs_operational": 2, 00:07:55.942 "base_bdevs_list": [ 00:07:55.942 { 00:07:55.943 "name": "pt1", 00:07:55.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.943 "is_configured": true, 00:07:55.943 "data_offset": 2048, 00:07:55.943 "data_size": 63488 00:07:55.943 }, 00:07:55.943 { 00:07:55.943 "name": "pt2", 00:07:55.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.943 "is_configured": true, 00:07:55.943 "data_offset": 2048, 00:07:55.943 "data_size": 63488 00:07:55.943 } 00:07:55.943 ] 00:07:55.943 } 00:07:55.943 } 00:07:55.943 }' 00:07:55.943 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.943 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:55.943 pt2' 00:07:55.943 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:55.943 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:55.943 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:56.202 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:56.202 "name": "pt1", 00:07:56.202 "aliases": [ 00:07:56.202 "00000000-0000-0000-0000-000000000001" 00:07:56.202 ], 00:07:56.202 "product_name": "passthru", 00:07:56.202 "block_size": 512, 00:07:56.202 "num_blocks": 65536, 00:07:56.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.202 "assigned_rate_limits": { 00:07:56.202 "rw_ios_per_sec": 0, 00:07:56.202 "rw_mbytes_per_sec": 0, 00:07:56.202 "r_mbytes_per_sec": 0, 00:07:56.202 "w_mbytes_per_sec": 0 00:07:56.202 }, 00:07:56.202 "claimed": true, 00:07:56.202 "claim_type": "exclusive_write", 00:07:56.202 "zoned": false, 00:07:56.202 "supported_io_types": { 00:07:56.202 "read": true, 00:07:56.202 "write": true, 00:07:56.202 "unmap": true, 00:07:56.202 "flush": true, 00:07:56.202 "reset": true, 00:07:56.202 "nvme_admin": false, 00:07:56.202 "nvme_io": false, 00:07:56.202 "nvme_io_md": false, 00:07:56.202 "write_zeroes": true, 00:07:56.202 "zcopy": true, 00:07:56.202 "get_zone_info": false, 00:07:56.202 "zone_management": false, 00:07:56.202 "zone_append": false, 00:07:56.202 "compare": false, 00:07:56.202 "compare_and_write": false, 00:07:56.202 "abort": true, 00:07:56.202 "seek_hole": false, 00:07:56.202 "seek_data": false, 00:07:56.202 "copy": true, 00:07:56.202 "nvme_iov_md": false 00:07:56.202 }, 00:07:56.202 "memory_domains": [ 00:07:56.202 { 00:07:56.202 "dma_device_id": "system", 00:07:56.202 "dma_device_type": 1 00:07:56.202 }, 00:07:56.202 { 00:07:56.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.202 "dma_device_type": 2 00:07:56.202 } 00:07:56.202 ], 00:07:56.202 "driver_specific": { 00:07:56.202 "passthru": { 00:07:56.202 "name": "pt1", 00:07:56.202 "base_bdev_name": "malloc1" 00:07:56.202 } 00:07:56.202 } 00:07:56.202 }' 00:07:56.202 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:56.462 06:39:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:56.462 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:56.462 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:56.462 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:56.462 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:56.462 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:56.462 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:56.462 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:56.462 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:56.722 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:56.722 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:56.722 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:56.722 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:56.722 06:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:56.982 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:56.982 "name": "pt2", 00:07:56.982 "aliases": [ 00:07:56.982 "00000000-0000-0000-0000-000000000002" 00:07:56.982 ], 00:07:56.982 "product_name": "passthru", 00:07:56.982 "block_size": 512, 00:07:56.982 "num_blocks": 65536, 00:07:56.982 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.982 "assigned_rate_limits": { 00:07:56.982 "rw_ios_per_sec": 0, 00:07:56.982 "rw_mbytes_per_sec": 0, 00:07:56.982 "r_mbytes_per_sec": 0, 00:07:56.982 "w_mbytes_per_sec": 0 00:07:56.982 }, 00:07:56.982 "claimed": true, 00:07:56.982 "claim_type": "exclusive_write", 00:07:56.982 "zoned": false, 00:07:56.982 "supported_io_types": { 00:07:56.982 "read": true, 00:07:56.982 "write": true, 00:07:56.982 "unmap": true, 00:07:56.982 "flush": true, 00:07:56.982 "reset": true, 00:07:56.982 "nvme_admin": false, 00:07:56.982 "nvme_io": false, 00:07:56.982 "nvme_io_md": false, 00:07:56.982 "write_zeroes": true, 00:07:56.982 "zcopy": true, 00:07:56.982 "get_zone_info": false, 00:07:56.982 "zone_management": false, 00:07:56.982 "zone_append": false, 00:07:56.982 "compare": false, 00:07:56.982 "compare_and_write": false, 00:07:56.982 "abort": true, 00:07:56.982 "seek_hole": false, 00:07:56.982 "seek_data": false, 00:07:56.982 "copy": true, 00:07:56.982 "nvme_iov_md": false 00:07:56.982 }, 00:07:56.982 "memory_domains": [ 00:07:56.982 { 00:07:56.982 "dma_device_id": "system", 00:07:56.982 "dma_device_type": 1 00:07:56.982 }, 00:07:56.982 { 00:07:56.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.982 "dma_device_type": 2 00:07:56.982 } 00:07:56.982 ], 00:07:56.982 "driver_specific": { 00:07:56.982 "passthru": { 00:07:56.982 "name": "pt2", 00:07:56.982 "base_bdev_name": "malloc2" 00:07:56.982 } 00:07:56.982 } 00:07:56.982 }' 00:07:56.982 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:56.982 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:56.982 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:07:56.982 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:56.982 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:56.982 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:56.982 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:57.242 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:57.242 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:57.242 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:57.242 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:57.242 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:57.242 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:57.242 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:07:57.503 [2024-08-14 06:39:24.591907] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.503 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' a23c7590-c1b8-4fda-8f87-31c6fb84cd2a '!=' a23c7590-c1b8-4fda-8f87-31c6fb84cd2a ']' 00:07:57.504 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:07:57.504 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:57.504 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:57.504 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 73111 00:07:57.504 06:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 73111 ']' 00:07:57.504 06:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 73111 00:07:57.504 06:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:07:57.504 06:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:57.504 06:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73111 00:07:57.504 killing process with pid 73111 00:07:57.504 06:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:57.504 06:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:57.504 06:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73111' 00:07:57.504 06:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 73111 00:07:57.504 [2024-08-14 06:39:24.654426] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.504 06:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 73111 00:07:57.504 [2024-08-14 06:39:24.654545] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.504 [2024-08-14 06:39:24.654603] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.504 [2024-08-14 06:39:24.654623] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, 
state offline 00:07:57.504 [2024-08-14 06:39:24.679131] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.775 06:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:07:57.775 00:07:57.775 real 0m11.109s 00:07:57.775 user 0m20.089s 00:07:57.775 sys 0m1.658s 00:07:57.775 06:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:57.775 ************************************ 00:07:57.775 END TEST raid_superblock_test 00:07:57.775 ************************************ 00:07:57.775 06:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.775 06:39:24 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:57.775 06:39:24 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:57.775 06:39:24 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.775 06:39:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.775 ************************************ 00:07:57.775 START TEST raid_read_error_test 00:07:57.775 ************************************ 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 2 read 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:07:57.775 06:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:07:57.775 06:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp 
-p /raidtest 00:07:57.775 06:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.zPRbLTwex0 00:07:57.775 06:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=73461 00:07:57.775 06:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 73461 /var/tmp/spdk-raid.sock 00:07:57.775 06:39:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 73461 ']' 00:07:57.775 06:39:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:57.775 06:39:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:57.775 06:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:57.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:57.775 06:39:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:57.775 06:39:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:57.775 06:39:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.035 [2024-08-14 06:39:25.094484] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:07:58.035 [2024-08-14 06:39:25.094624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73461 ] 00:07:58.035 [2024-08-14 06:39:25.242907] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.295 [2024-08-14 06:39:25.295569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.295 [2024-08-14 06:39:25.340910] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.295 [2024-08-14 06:39:25.340953] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.863 06:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:58.863 06:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:07:58.863 06:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:58.863 06:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:59.122 BaseBdev1_malloc 00:07:59.122 06:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:59.379 true 00:07:59.379 06:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:59.637 [2024-08-14 06:39:26.706378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:59.637 [2024-08-14 06:39:26.706471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.637 [2024-08-14 
06:39:26.706505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:59.637 [2024-08-14 06:39:26.706525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.638 [2024-08-14 06:39:26.709164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.638 [2024-08-14 06:39:26.709224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:59.638 BaseBdev1 00:07:59.638 06:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:59.638 06:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:59.895 BaseBdev2_malloc 00:07:59.895 06:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:00.153 true 00:08:00.153 06:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:00.412 [2024-08-14 06:39:27.410825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:00.412 [2024-08-14 06:39:27.410922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.412 [2024-08-14 06:39:27.410952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:00.412 [2024-08-14 06:39:27.410965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.412 [2024-08-14 06:39:27.413554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.412 BaseBdev2 00:08:00.412 [2024-08-14 06:39:27.413682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:00.412 06:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:00.412 [2024-08-14 06:39:27.638531] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.412 [2024-08-14 06:39:27.640844] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:00.412 [2024-08-14 06:39:27.641077] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:00.412 [2024-08-14 06:39:27.641097] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:00.412 [2024-08-14 06:39:27.641455] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:00.412 [2024-08-14 06:39:27.641642] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:00.412 [2024-08-14 06:39:27.641654] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:00.412 [2024-08-14 06:39:27.641837] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.671 06:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:00.671 06:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:00.671 06:39:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:00.671 06:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:00.671 06:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:00.671 06:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:00.671 06:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:00.671 06:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:00.671 06:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:00.671 06:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:00.671 06:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:00.671 06:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.929 06:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:00.929 "name": "raid_bdev1", 00:08:00.929 "uuid": "7ab6b0f1-d77e-4861-9d4b-a4673947334d", 00:08:00.929 "strip_size_kb": 64, 00:08:00.929 "state": "online", 00:08:00.929 "raid_level": "concat", 00:08:00.929 "superblock": true, 00:08:00.929 "num_base_bdevs": 2, 00:08:00.929 "num_base_bdevs_discovered": 2, 00:08:00.929 "num_base_bdevs_operational": 2, 00:08:00.929 "base_bdevs_list": [ 00:08:00.929 { 00:08:00.929 "name": "BaseBdev1", 00:08:00.929 "uuid": "8cd5462a-c995-5eb6-9578-b9eee446b585", 00:08:00.929 "is_configured": true, 00:08:00.929 "data_offset": 2048, 00:08:00.929 "data_size": 63488 00:08:00.929 }, 00:08:00.929 { 00:08:00.929 "name": "BaseBdev2", 00:08:00.929 "uuid": "553b9798-d3d8-52c2-a990-8190d4ba9e23", 00:08:00.929 "is_configured": true, 00:08:00.929 "data_offset": 2048, 00:08:00.929 "data_size": 63488 00:08:00.929 } 00:08:00.929 ] 00:08:00.929 }' 00:08:00.929 06:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:00.929 06:39:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.497 06:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:08:01.497 06:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:01.497 [2024-08-14 06:39:28.625543] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:02.438 06:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:02.695 06:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:08:02.695 06:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:08:02.695 06:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:08:02.695 06:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:02.695 06:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:02.695 06:39:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:02.695 06:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:02.695 06:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:02.695 06:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:02.695 06:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:02.695 06:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:02.695 06:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:02.695 06:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:02.695 06:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:02.695 06:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.954 06:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:02.954 "name": "raid_bdev1", 00:08:02.954 "uuid": "7ab6b0f1-d77e-4861-9d4b-a4673947334d", 00:08:02.954 "strip_size_kb": 64, 00:08:02.954 "state": "online", 00:08:02.954 "raid_level": "concat", 00:08:02.954 "superblock": true, 00:08:02.954 "num_base_bdevs": 2, 00:08:02.954 "num_base_bdevs_discovered": 2, 00:08:02.954 "num_base_bdevs_operational": 2, 00:08:02.954 "base_bdevs_list": [ 00:08:02.954 { 00:08:02.954 "name": "BaseBdev1", 00:08:02.954 "uuid": "8cd5462a-c995-5eb6-9578-b9eee446b585", 00:08:02.954 "is_configured": true, 00:08:02.954 "data_offset": 2048, 00:08:02.954 "data_size": 63488 00:08:02.954 }, 00:08:02.954 { 00:08:02.954 "name": "BaseBdev2", 00:08:02.954 "uuid": "553b9798-d3d8-52c2-a990-8190d4ba9e23", 00:08:02.954 "is_configured": true, 00:08:02.954 "data_offset": 2048, 00:08:02.954 "data_size": 63488 00:08:02.954 } 00:08:02.954 ] 00:08:02.954 }' 00:08:02.954 06:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:02.954 06:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.521 06:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:03.779 [2024-08-14 06:39:30.868903] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:03.779 [2024-08-14 06:39:30.868949] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.779 [2024-08-14 06:39:30.871839] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.779 [2024-08-14 06:39:30.871907] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.779 [2024-08-14 06:39:30.871947] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.779 [2024-08-14 06:39:30.871969] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:03.779 0 00:08:03.779 06:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 73461 00:08:03.779 06:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 73461 ']' 00:08:03.780 06:39:30 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@950 -- # kill -0 73461 00:08:03.780 06:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:08:03.780 06:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:03.780 06:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73461 00:08:03.780 killing process with pid 73461 00:08:03.780 06:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:03.780 06:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:03.780 06:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73461' 00:08:03.780 06:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 73461 00:08:03.780 06:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 73461 00:08:03.780 [2024-08-14 06:39:30.931088] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.780 [2024-08-14 06:39:30.947679] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.038 06:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.zPRbLTwex0 00:08:04.038 06:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:08:04.038 06:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:08:04.038 ************************************ 00:08:04.038 END TEST raid_read_error_test 00:08:04.038 ************************************ 00:08:04.038 06:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.45 00:08:04.038 06:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:08:04.038 06:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:04.038 06:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:04.038 06:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.45 != \0\.\0\0 ]] 00:08:04.038 00:08:04.038 real 0m6.206s 00:08:04.038 user 0m9.853s 00:08:04.038 sys 0m0.858s 00:08:04.038 06:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:04.038 06:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.038 06:39:31 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:04.038 06:39:31 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:04.038 06:39:31 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.038 06:39:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.038 ************************************ 00:08:04.038 START TEST raid_write_error_test 00:08:04.038 ************************************ 00:08:04.038 06:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 2 write 00:08:04.038 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:08:04.038 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:08:04.038 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:08:04.038 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:08:04.038 06:39:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:04.038 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:08:04.038 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:08:04.038 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.nHnOCf9NkB 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=73632 00:08:04.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 73632 /var/tmp/spdk-raid.sock 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 73632 ']' 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:04.039 06:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.298 [2024-08-14 06:39:31.370657] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
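The write-error variant above follows the same shape as the read-error test before it: bdevperf is started with its RPC socket at /var/tmp/spdk-raid.sock and a log file under /raidtest, two malloc bdevs are each wrapped in an error bdev and a passthru bdev, a concat raid bdev is assembled over them, and a write failure is then injected on the first base bdev while the randrw workload runs. A condensed sketch of that RPC sequence, using the same commands that appear in the trace (rpc.py here stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; ordering, waits, and cleanup are omitted, so this is an illustrative sketch rather than the script itself):

    # build BaseBdev1/BaseBdev2: malloc bdev -> error bdev -> passthru bdev
    for i in 1 2; do
        rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create "BaseBdev${i}_malloc"
        rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
    # assemble a concat raid bdev with a 64 KiB strip size and an on-disk superblock
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
    # inject write failures on the first base bdev, then let bdevperf run the workload
    rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure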
00:08:04.298 [2024-08-14 06:39:31.370814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73632 ] 00:08:04.298 [2024-08-14 06:39:31.519029] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.556 [2024-08-14 06:39:31.573375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.556 [2024-08-14 06:39:31.618952] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.556 [2024-08-14 06:39:31.619096] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.125 06:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:05.125 06:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:08:05.125 06:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:08:05.125 06:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:05.391 BaseBdev1_malloc 00:08:05.391 06:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:05.650 true 00:08:05.650 06:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:05.910 [2024-08-14 06:39:32.985578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:05.910 [2024-08-14 06:39:32.985775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.910 [2024-08-14 06:39:32.985818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:05.910 [2024-08-14 06:39:32.985836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.910 [2024-08-14 06:39:32.988491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.910 [2024-08-14 06:39:32.988545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:05.910 BaseBdev1 00:08:05.910 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:08:05.910 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:06.169 BaseBdev2_malloc 00:08:06.169 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:06.428 true 00:08:06.428 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:06.695 [2024-08-14 06:39:33.726073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:06.695 [2024-08-14 06:39:33.726260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.695 [2024-08-14 06:39:33.726325] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:06.695 [2024-08-14 06:39:33.726366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.695 [2024-08-14 06:39:33.728953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.695 [2024-08-14 06:39:33.729056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:06.695 BaseBdev2 00:08:06.695 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:06.957 [2024-08-14 06:39:33.969741] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.957 [2024-08-14 06:39:33.972097] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.957 [2024-08-14 06:39:33.972447] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:06.957 [2024-08-14 06:39:33.972516] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:06.957 [2024-08-14 06:39:33.972870] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:06.957 [2024-08-14 06:39:33.973094] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:06.957 [2024-08-14 06:39:33.973142] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:06.957 [2024-08-14 06:39:33.973417] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.957 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:06.957 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:06.957 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:06.957 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:06.957 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:06.957 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:06.957 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:06.957 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:06.957 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:06.957 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:06.957 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:06.957 06:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.217 06:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:07.217 "name": "raid_bdev1", 00:08:07.217 "uuid": "f4a8bb4f-40e7-49e2-8b87-41553c1841bf", 00:08:07.217 "strip_size_kb": 64, 00:08:07.217 "state": "online", 00:08:07.217 "raid_level": "concat", 00:08:07.217 "superblock": true, 00:08:07.217 "num_base_bdevs": 2, 00:08:07.217 
"num_base_bdevs_discovered": 2, 00:08:07.217 "num_base_bdevs_operational": 2, 00:08:07.217 "base_bdevs_list": [ 00:08:07.217 { 00:08:07.217 "name": "BaseBdev1", 00:08:07.217 "uuid": "4ed6c259-ea2c-5f1a-8f20-8e3a303efbab", 00:08:07.217 "is_configured": true, 00:08:07.217 "data_offset": 2048, 00:08:07.217 "data_size": 63488 00:08:07.217 }, 00:08:07.217 { 00:08:07.217 "name": "BaseBdev2", 00:08:07.217 "uuid": "8a3dba48-3d7b-550f-a22b-6d9586688407", 00:08:07.217 "is_configured": true, 00:08:07.217 "data_offset": 2048, 00:08:07.217 "data_size": 63488 00:08:07.217 } 00:08:07.217 ] 00:08:07.217 }' 00:08:07.217 06:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:07.217 06:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.786 06:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:07.786 06:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:08:07.786 [2024-08-14 06:39:34.961055] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:08.725 06:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:08.986 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:08:08.986 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:08:08.986 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:08:08.986 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:08.986 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:08.986 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:08.986 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:08.986 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:08.986 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:08.986 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:08.986 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:08.986 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:08.986 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:08.986 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:08.986 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.245 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:09.245 "name": "raid_bdev1", 00:08:09.245 "uuid": "f4a8bb4f-40e7-49e2-8b87-41553c1841bf", 00:08:09.245 "strip_size_kb": 64, 00:08:09.245 "state": "online", 00:08:09.245 "raid_level": "concat", 00:08:09.245 "superblock": true, 00:08:09.245 "num_base_bdevs": 2, 00:08:09.245 
"num_base_bdevs_discovered": 2, 00:08:09.245 "num_base_bdevs_operational": 2, 00:08:09.245 "base_bdevs_list": [ 00:08:09.245 { 00:08:09.245 "name": "BaseBdev1", 00:08:09.245 "uuid": "4ed6c259-ea2c-5f1a-8f20-8e3a303efbab", 00:08:09.245 "is_configured": true, 00:08:09.245 "data_offset": 2048, 00:08:09.245 "data_size": 63488 00:08:09.245 }, 00:08:09.245 { 00:08:09.245 "name": "BaseBdev2", 00:08:09.245 "uuid": "8a3dba48-3d7b-550f-a22b-6d9586688407", 00:08:09.245 "is_configured": true, 00:08:09.245 "data_offset": 2048, 00:08:09.245 "data_size": 63488 00:08:09.245 } 00:08:09.245 ] 00:08:09.245 }' 00:08:09.245 06:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:09.245 06:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.813 06:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:10.073 [2024-08-14 06:39:37.272815] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:10.073 [2024-08-14 06:39:37.272956] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.073 [2024-08-14 06:39:37.275911] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.073 [2024-08-14 06:39:37.276021] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.073 [2024-08-14 06:39:37.276095] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:10.073 [2024-08-14 06:39:37.276150] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:10.073 0 00:08:10.073 06:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 73632 00:08:10.073 06:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 73632 ']' 00:08:10.073 06:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 73632 00:08:10.073 06:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:08:10.073 06:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:10.073 06:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73632 00:08:10.333 06:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:10.333 06:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:10.333 killing process with pid 73632 00:08:10.333 06:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73632' 00:08:10.333 06:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 73632 00:08:10.333 [2024-08-14 06:39:37.329283] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:10.333 06:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 73632 00:08:10.333 [2024-08-14 06:39:37.346036] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.594 06:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.nHnOCf9NkB 00:08:10.594 06:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:08:10.594 06:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- 
# awk '{print $6}' 00:08:10.594 ************************************ 00:08:10.594 END TEST raid_write_error_test 00:08:10.594 ************************************ 00:08:10.594 06:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.43 00:08:10.594 06:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:08:10.594 06:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:10.594 06:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:10.594 06:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.43 != \0\.\0\0 ]] 00:08:10.594 00:08:10.594 real 0m6.336s 00:08:10.594 user 0m10.071s 00:08:10.594 sys 0m0.898s 00:08:10.594 06:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.594 06:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.594 06:39:37 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:08:10.594 06:39:37 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:10.595 06:39:37 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:10.595 06:39:37 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:10.595 06:39:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.595 ************************************ 00:08:10.595 START TEST raid_state_function_test 00:08:10.595 ************************************ 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 false 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local 
strip_size_create_arg 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:08:10.595 Process raid pid: 73801 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=73801 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 73801' 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 73801 /var/tmp/spdk-raid.sock 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 73801 ']' 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:10.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:10.595 06:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.595 [2024-08-14 06:39:37.786707] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
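Unlike the error tests, raid_state_function_test drives a bare bdev_svc app instead of bdevperf, and it checks how a raid1 Existed_Raid moves between the "configuring" and "online" states as its base bdevs appear and disappear. A condensed sketch of the checks that follow, built from the same RPCs used in the trace (the .state projection in the jq filter and the simplified ordering are added here for brevity and are not taken verbatim from the script):

    # creating the raid before its base bdevs exist leaves it in the "configuring" state
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid").state'    # -> configuring
    # once both base bdevs have been created they are claimed and the raid goes online
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid").state'    # -> online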
00:08:10.595 [2024-08-14 06:39:37.786951] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.854 [2024-08-14 06:39:37.921995] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.854 [2024-08-14 06:39:37.976845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.854 [2024-08-14 06:39:38.022740] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.854 [2024-08-14 06:39:38.022851] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.794 06:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:11.794 06:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:08:11.794 06:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:11.794 [2024-08-14 06:39:38.940301] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.794 [2024-08-14 06:39:38.940379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.794 [2024-08-14 06:39:38.940394] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.794 [2024-08-14 06:39:38.940404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.794 06:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:11.794 06:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:11.794 06:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:11.794 06:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:11.794 06:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:11.794 06:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:11.794 06:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:11.794 06:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:11.794 06:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:11.794 06:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:11.794 06:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:11.794 06:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.054 06:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:12.054 "name": "Existed_Raid", 00:08:12.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.054 "strip_size_kb": 0, 00:08:12.054 "state": "configuring", 00:08:12.054 "raid_level": "raid1", 00:08:12.054 "superblock": false, 00:08:12.054 "num_base_bdevs": 2, 
00:08:12.054 "num_base_bdevs_discovered": 0, 00:08:12.054 "num_base_bdevs_operational": 2, 00:08:12.054 "base_bdevs_list": [ 00:08:12.054 { 00:08:12.054 "name": "BaseBdev1", 00:08:12.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.054 "is_configured": false, 00:08:12.054 "data_offset": 0, 00:08:12.054 "data_size": 0 00:08:12.054 }, 00:08:12.054 { 00:08:12.054 "name": "BaseBdev2", 00:08:12.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.054 "is_configured": false, 00:08:12.054 "data_offset": 0, 00:08:12.054 "data_size": 0 00:08:12.054 } 00:08:12.054 ] 00:08:12.054 }' 00:08:12.054 06:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:12.054 06:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.623 06:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:12.884 [2024-08-14 06:39:40.014507] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:12.884 [2024-08-14 06:39:40.014656] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:12.884 06:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:13.143 [2024-08-14 06:39:40.262280] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.144 [2024-08-14 06:39:40.262441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.144 [2024-08-14 06:39:40.262505] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:13.144 [2024-08-14 06:39:40.262549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:13.144 06:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:13.403 [2024-08-14 06:39:40.503578] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:13.404 BaseBdev1 00:08:13.404 06:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:13.404 06:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:08:13.404 06:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:13.404 06:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:08:13.404 06:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:13.404 06:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:13.404 06:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:13.663 06:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:13.923 [ 00:08:13.923 { 00:08:13.923 "name": "BaseBdev1", 00:08:13.923 "aliases": [ 00:08:13.923 
"a80a6384-4909-4ef5-aff5-a86132e7b024" 00:08:13.923 ], 00:08:13.923 "product_name": "Malloc disk", 00:08:13.923 "block_size": 512, 00:08:13.923 "num_blocks": 65536, 00:08:13.923 "uuid": "a80a6384-4909-4ef5-aff5-a86132e7b024", 00:08:13.923 "assigned_rate_limits": { 00:08:13.923 "rw_ios_per_sec": 0, 00:08:13.923 "rw_mbytes_per_sec": 0, 00:08:13.923 "r_mbytes_per_sec": 0, 00:08:13.923 "w_mbytes_per_sec": 0 00:08:13.923 }, 00:08:13.923 "claimed": true, 00:08:13.923 "claim_type": "exclusive_write", 00:08:13.923 "zoned": false, 00:08:13.923 "supported_io_types": { 00:08:13.923 "read": true, 00:08:13.923 "write": true, 00:08:13.923 "unmap": true, 00:08:13.923 "flush": true, 00:08:13.923 "reset": true, 00:08:13.923 "nvme_admin": false, 00:08:13.923 "nvme_io": false, 00:08:13.923 "nvme_io_md": false, 00:08:13.923 "write_zeroes": true, 00:08:13.923 "zcopy": true, 00:08:13.923 "get_zone_info": false, 00:08:13.923 "zone_management": false, 00:08:13.923 "zone_append": false, 00:08:13.924 "compare": false, 00:08:13.924 "compare_and_write": false, 00:08:13.924 "abort": true, 00:08:13.924 "seek_hole": false, 00:08:13.924 "seek_data": false, 00:08:13.924 "copy": true, 00:08:13.924 "nvme_iov_md": false 00:08:13.924 }, 00:08:13.924 "memory_domains": [ 00:08:13.924 { 00:08:13.924 "dma_device_id": "system", 00:08:13.924 "dma_device_type": 1 00:08:13.924 }, 00:08:13.924 { 00:08:13.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.924 "dma_device_type": 2 00:08:13.924 } 00:08:13.924 ], 00:08:13.924 "driver_specific": {} 00:08:13.924 } 00:08:13.924 ] 00:08:13.924 06:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:08:13.924 06:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:13.924 06:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:13.924 06:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:13.924 06:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:13.924 06:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:13.924 06:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:13.924 06:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:13.924 06:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:13.924 06:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:13.924 06:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:13.924 06:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:13.924 06:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.183 06:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:14.183 "name": "Existed_Raid", 00:08:14.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.183 "strip_size_kb": 0, 00:08:14.183 "state": "configuring", 00:08:14.183 "raid_level": "raid1", 00:08:14.184 "superblock": false, 00:08:14.184 "num_base_bdevs": 2, 00:08:14.184 "num_base_bdevs_discovered": 1, 
00:08:14.184 "num_base_bdevs_operational": 2, 00:08:14.184 "base_bdevs_list": [ 00:08:14.184 { 00:08:14.184 "name": "BaseBdev1", 00:08:14.184 "uuid": "a80a6384-4909-4ef5-aff5-a86132e7b024", 00:08:14.184 "is_configured": true, 00:08:14.184 "data_offset": 0, 00:08:14.184 "data_size": 65536 00:08:14.184 }, 00:08:14.184 { 00:08:14.184 "name": "BaseBdev2", 00:08:14.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.184 "is_configured": false, 00:08:14.184 "data_offset": 0, 00:08:14.184 "data_size": 0 00:08:14.184 } 00:08:14.184 ] 00:08:14.184 }' 00:08:14.184 06:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:14.184 06:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.754 06:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:15.014 [2024-08-14 06:39:42.129085] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:15.014 [2024-08-14 06:39:42.129261] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:15.014 06:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:15.274 [2024-08-14 06:39:42.356775] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.274 [2024-08-14 06:39:42.358964] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.274 [2024-08-14 06:39:42.359020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.274 06:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:15.274 06:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:15.274 06:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:15.274 06:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:15.274 06:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:15.274 06:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:15.274 06:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:15.274 06:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:15.274 06:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:15.274 06:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:15.274 06:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:15.274 06:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:15.274 06:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.274 06:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:15.534 06:39:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:15.534 "name": "Existed_Raid", 00:08:15.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.534 "strip_size_kb": 0, 00:08:15.534 "state": "configuring", 00:08:15.534 "raid_level": "raid1", 00:08:15.534 "superblock": false, 00:08:15.534 "num_base_bdevs": 2, 00:08:15.534 "num_base_bdevs_discovered": 1, 00:08:15.534 "num_base_bdevs_operational": 2, 00:08:15.534 "base_bdevs_list": [ 00:08:15.534 { 00:08:15.534 "name": "BaseBdev1", 00:08:15.534 "uuid": "a80a6384-4909-4ef5-aff5-a86132e7b024", 00:08:15.534 "is_configured": true, 00:08:15.534 "data_offset": 0, 00:08:15.534 "data_size": 65536 00:08:15.534 }, 00:08:15.534 { 00:08:15.534 "name": "BaseBdev2", 00:08:15.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.534 "is_configured": false, 00:08:15.534 "data_offset": 0, 00:08:15.534 "data_size": 0 00:08:15.534 } 00:08:15.534 ] 00:08:15.534 }' 00:08:15.534 06:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:15.534 06:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.109 06:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:16.369 [2024-08-14 06:39:43.522937] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:16.369 [2024-08-14 06:39:43.523100] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:16.369 [2024-08-14 06:39:43.523138] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:16.369 [2024-08-14 06:39:43.523580] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:16.370 [2024-08-14 06:39:43.523815] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:16.370 [2024-08-14 06:39:43.523868] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:16.370 [2024-08-14 06:39:43.524201] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.370 BaseBdev2 00:08:16.370 06:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:16.370 06:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:08:16.370 06:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:16.370 06:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:08:16.370 06:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:16.370 06:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:16.370 06:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:16.631 06:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:16.891 [ 00:08:16.891 { 00:08:16.891 "name": "BaseBdev2", 00:08:16.891 "aliases": [ 00:08:16.891 "b42e9ed5-0a78-4ea5-be0f-63bae237e976" 00:08:16.891 ], 00:08:16.891 "product_name": "Malloc disk", 
00:08:16.891 "block_size": 512, 00:08:16.891 "num_blocks": 65536, 00:08:16.891 "uuid": "b42e9ed5-0a78-4ea5-be0f-63bae237e976", 00:08:16.891 "assigned_rate_limits": { 00:08:16.891 "rw_ios_per_sec": 0, 00:08:16.891 "rw_mbytes_per_sec": 0, 00:08:16.891 "r_mbytes_per_sec": 0, 00:08:16.891 "w_mbytes_per_sec": 0 00:08:16.891 }, 00:08:16.891 "claimed": true, 00:08:16.891 "claim_type": "exclusive_write", 00:08:16.891 "zoned": false, 00:08:16.891 "supported_io_types": { 00:08:16.891 "read": true, 00:08:16.891 "write": true, 00:08:16.891 "unmap": true, 00:08:16.891 "flush": true, 00:08:16.891 "reset": true, 00:08:16.891 "nvme_admin": false, 00:08:16.891 "nvme_io": false, 00:08:16.891 "nvme_io_md": false, 00:08:16.891 "write_zeroes": true, 00:08:16.891 "zcopy": true, 00:08:16.891 "get_zone_info": false, 00:08:16.891 "zone_management": false, 00:08:16.891 "zone_append": false, 00:08:16.891 "compare": false, 00:08:16.891 "compare_and_write": false, 00:08:16.891 "abort": true, 00:08:16.891 "seek_hole": false, 00:08:16.891 "seek_data": false, 00:08:16.891 "copy": true, 00:08:16.891 "nvme_iov_md": false 00:08:16.891 }, 00:08:16.891 "memory_domains": [ 00:08:16.891 { 00:08:16.891 "dma_device_id": "system", 00:08:16.891 "dma_device_type": 1 00:08:16.891 }, 00:08:16.891 { 00:08:16.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.891 "dma_device_type": 2 00:08:16.891 } 00:08:16.891 ], 00:08:16.891 "driver_specific": {} 00:08:16.891 } 00:08:16.891 ] 00:08:16.891 06:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:08:16.891 06:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:16.891 06:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:16.891 06:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:16.891 06:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:16.891 06:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:16.891 06:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:16.891 06:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:16.891 06:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:16.891 06:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:16.891 06:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:16.891 06:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:16.891 06:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:16.891 06:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:16.891 06:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.151 06:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:17.151 "name": "Existed_Raid", 00:08:17.151 "uuid": "0cd71c98-32ea-4158-814e-9098cc047b2f", 00:08:17.151 "strip_size_kb": 0, 00:08:17.151 "state": "online", 00:08:17.151 "raid_level": "raid1", 00:08:17.151 
"superblock": false, 00:08:17.151 "num_base_bdevs": 2, 00:08:17.151 "num_base_bdevs_discovered": 2, 00:08:17.151 "num_base_bdevs_operational": 2, 00:08:17.151 "base_bdevs_list": [ 00:08:17.151 { 00:08:17.151 "name": "BaseBdev1", 00:08:17.151 "uuid": "a80a6384-4909-4ef5-aff5-a86132e7b024", 00:08:17.151 "is_configured": true, 00:08:17.151 "data_offset": 0, 00:08:17.151 "data_size": 65536 00:08:17.151 }, 00:08:17.151 { 00:08:17.151 "name": "BaseBdev2", 00:08:17.151 "uuid": "b42e9ed5-0a78-4ea5-be0f-63bae237e976", 00:08:17.151 "is_configured": true, 00:08:17.151 "data_offset": 0, 00:08:17.151 "data_size": 65536 00:08:17.151 } 00:08:17.151 ] 00:08:17.151 }' 00:08:17.151 06:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:17.151 06:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.091 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:18.091 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:18.091 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:18.091 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:18.092 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:18.092 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:18.092 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:18.092 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:18.092 [2024-08-14 06:39:45.248775] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.092 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:18.092 "name": "Existed_Raid", 00:08:18.092 "aliases": [ 00:08:18.092 "0cd71c98-32ea-4158-814e-9098cc047b2f" 00:08:18.092 ], 00:08:18.092 "product_name": "Raid Volume", 00:08:18.092 "block_size": 512, 00:08:18.092 "num_blocks": 65536, 00:08:18.092 "uuid": "0cd71c98-32ea-4158-814e-9098cc047b2f", 00:08:18.092 "assigned_rate_limits": { 00:08:18.092 "rw_ios_per_sec": 0, 00:08:18.092 "rw_mbytes_per_sec": 0, 00:08:18.092 "r_mbytes_per_sec": 0, 00:08:18.092 "w_mbytes_per_sec": 0 00:08:18.092 }, 00:08:18.092 "claimed": false, 00:08:18.092 "zoned": false, 00:08:18.092 "supported_io_types": { 00:08:18.092 "read": true, 00:08:18.092 "write": true, 00:08:18.092 "unmap": false, 00:08:18.092 "flush": false, 00:08:18.092 "reset": true, 00:08:18.092 "nvme_admin": false, 00:08:18.092 "nvme_io": false, 00:08:18.092 "nvme_io_md": false, 00:08:18.092 "write_zeroes": true, 00:08:18.092 "zcopy": false, 00:08:18.092 "get_zone_info": false, 00:08:18.092 "zone_management": false, 00:08:18.092 "zone_append": false, 00:08:18.092 "compare": false, 00:08:18.092 "compare_and_write": false, 00:08:18.092 "abort": false, 00:08:18.092 "seek_hole": false, 00:08:18.092 "seek_data": false, 00:08:18.092 "copy": false, 00:08:18.092 "nvme_iov_md": false 00:08:18.092 }, 00:08:18.092 "memory_domains": [ 00:08:18.092 { 00:08:18.092 "dma_device_id": "system", 00:08:18.092 "dma_device_type": 1 00:08:18.092 }, 00:08:18.092 { 00:08:18.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.092 "dma_device_type": 2 
00:08:18.092 }, 00:08:18.092 { 00:08:18.092 "dma_device_id": "system", 00:08:18.092 "dma_device_type": 1 00:08:18.092 }, 00:08:18.092 { 00:08:18.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.092 "dma_device_type": 2 00:08:18.092 } 00:08:18.092 ], 00:08:18.092 "driver_specific": { 00:08:18.092 "raid": { 00:08:18.092 "uuid": "0cd71c98-32ea-4158-814e-9098cc047b2f", 00:08:18.092 "strip_size_kb": 0, 00:08:18.092 "state": "online", 00:08:18.092 "raid_level": "raid1", 00:08:18.092 "superblock": false, 00:08:18.092 "num_base_bdevs": 2, 00:08:18.092 "num_base_bdevs_discovered": 2, 00:08:18.092 "num_base_bdevs_operational": 2, 00:08:18.092 "base_bdevs_list": [ 00:08:18.092 { 00:08:18.092 "name": "BaseBdev1", 00:08:18.092 "uuid": "a80a6384-4909-4ef5-aff5-a86132e7b024", 00:08:18.092 "is_configured": true, 00:08:18.092 "data_offset": 0, 00:08:18.092 "data_size": 65536 00:08:18.092 }, 00:08:18.092 { 00:08:18.092 "name": "BaseBdev2", 00:08:18.092 "uuid": "b42e9ed5-0a78-4ea5-be0f-63bae237e976", 00:08:18.092 "is_configured": true, 00:08:18.092 "data_offset": 0, 00:08:18.092 "data_size": 65536 00:08:18.092 } 00:08:18.092 ] 00:08:18.092 } 00:08:18.092 } 00:08:18.092 }' 00:08:18.092 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:18.092 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:18.092 BaseBdev2' 00:08:18.092 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:18.092 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:18.092 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:18.352 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:18.352 "name": "BaseBdev1", 00:08:18.352 "aliases": [ 00:08:18.352 "a80a6384-4909-4ef5-aff5-a86132e7b024" 00:08:18.352 ], 00:08:18.352 "product_name": "Malloc disk", 00:08:18.352 "block_size": 512, 00:08:18.352 "num_blocks": 65536, 00:08:18.352 "uuid": "a80a6384-4909-4ef5-aff5-a86132e7b024", 00:08:18.352 "assigned_rate_limits": { 00:08:18.352 "rw_ios_per_sec": 0, 00:08:18.352 "rw_mbytes_per_sec": 0, 00:08:18.352 "r_mbytes_per_sec": 0, 00:08:18.352 "w_mbytes_per_sec": 0 00:08:18.352 }, 00:08:18.352 "claimed": true, 00:08:18.352 "claim_type": "exclusive_write", 00:08:18.352 "zoned": false, 00:08:18.352 "supported_io_types": { 00:08:18.352 "read": true, 00:08:18.352 "write": true, 00:08:18.352 "unmap": true, 00:08:18.352 "flush": true, 00:08:18.352 "reset": true, 00:08:18.352 "nvme_admin": false, 00:08:18.352 "nvme_io": false, 00:08:18.352 "nvme_io_md": false, 00:08:18.352 "write_zeroes": true, 00:08:18.352 "zcopy": true, 00:08:18.352 "get_zone_info": false, 00:08:18.352 "zone_management": false, 00:08:18.352 "zone_append": false, 00:08:18.352 "compare": false, 00:08:18.352 "compare_and_write": false, 00:08:18.352 "abort": true, 00:08:18.352 "seek_hole": false, 00:08:18.352 "seek_data": false, 00:08:18.352 "copy": true, 00:08:18.352 "nvme_iov_md": false 00:08:18.352 }, 00:08:18.352 "memory_domains": [ 00:08:18.352 { 00:08:18.352 "dma_device_id": "system", 00:08:18.352 "dma_device_type": 1 00:08:18.352 }, 00:08:18.352 { 00:08:18.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.352 "dma_device_type": 2 00:08:18.352 } 00:08:18.352 
], 00:08:18.352 "driver_specific": {} 00:08:18.352 }' 00:08:18.352 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:18.611 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:18.611 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:18.611 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:18.611 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:18.611 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:18.611 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:18.611 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:18.611 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:18.611 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:18.871 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:18.871 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:18.871 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:18.871 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:18.871 06:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:19.133 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:19.133 "name": "BaseBdev2", 00:08:19.133 "aliases": [ 00:08:19.133 "b42e9ed5-0a78-4ea5-be0f-63bae237e976" 00:08:19.133 ], 00:08:19.133 "product_name": "Malloc disk", 00:08:19.133 "block_size": 512, 00:08:19.133 "num_blocks": 65536, 00:08:19.133 "uuid": "b42e9ed5-0a78-4ea5-be0f-63bae237e976", 00:08:19.133 "assigned_rate_limits": { 00:08:19.133 "rw_ios_per_sec": 0, 00:08:19.133 "rw_mbytes_per_sec": 0, 00:08:19.133 "r_mbytes_per_sec": 0, 00:08:19.133 "w_mbytes_per_sec": 0 00:08:19.133 }, 00:08:19.133 "claimed": true, 00:08:19.133 "claim_type": "exclusive_write", 00:08:19.133 "zoned": false, 00:08:19.133 "supported_io_types": { 00:08:19.133 "read": true, 00:08:19.133 "write": true, 00:08:19.133 "unmap": true, 00:08:19.133 "flush": true, 00:08:19.133 "reset": true, 00:08:19.133 "nvme_admin": false, 00:08:19.133 "nvme_io": false, 00:08:19.133 "nvme_io_md": false, 00:08:19.133 "write_zeroes": true, 00:08:19.133 "zcopy": true, 00:08:19.133 "get_zone_info": false, 00:08:19.133 "zone_management": false, 00:08:19.133 "zone_append": false, 00:08:19.133 "compare": false, 00:08:19.133 "compare_and_write": false, 00:08:19.133 "abort": true, 00:08:19.133 "seek_hole": false, 00:08:19.133 "seek_data": false, 00:08:19.133 "copy": true, 00:08:19.133 "nvme_iov_md": false 00:08:19.133 }, 00:08:19.133 "memory_domains": [ 00:08:19.133 { 00:08:19.133 "dma_device_id": "system", 00:08:19.133 "dma_device_type": 1 00:08:19.133 }, 00:08:19.133 { 00:08:19.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.133 "dma_device_type": 2 00:08:19.133 } 00:08:19.133 ], 00:08:19.133 "driver_specific": {} 00:08:19.133 }' 00:08:19.133 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:19.133 06:39:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:19.133 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:19.133 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:19.133 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:19.418 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:19.418 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:19.418 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:19.419 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:19.419 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:19.419 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:19.419 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:19.419 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:19.715 [2024-08-14 06:39:46.815265] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:19.715 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:19.715 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:08:19.715 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:19.715 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:19.715 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:08:19.715 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:19.715 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:19.715 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:19.715 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:19.715 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:19.715 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:19.715 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:19.715 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:19.715 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:19.715 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:19.716 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.716 06:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:19.974 06:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:19.974 "name": "Existed_Raid", 00:08:19.974 "uuid": 
"0cd71c98-32ea-4158-814e-9098cc047b2f", 00:08:19.974 "strip_size_kb": 0, 00:08:19.974 "state": "online", 00:08:19.974 "raid_level": "raid1", 00:08:19.974 "superblock": false, 00:08:19.974 "num_base_bdevs": 2, 00:08:19.974 "num_base_bdevs_discovered": 1, 00:08:19.974 "num_base_bdevs_operational": 1, 00:08:19.974 "base_bdevs_list": [ 00:08:19.974 { 00:08:19.974 "name": null, 00:08:19.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.974 "is_configured": false, 00:08:19.974 "data_offset": 0, 00:08:19.974 "data_size": 65536 00:08:19.974 }, 00:08:19.974 { 00:08:19.974 "name": "BaseBdev2", 00:08:19.974 "uuid": "b42e9ed5-0a78-4ea5-be0f-63bae237e976", 00:08:19.974 "is_configured": true, 00:08:19.974 "data_offset": 0, 00:08:19.974 "data_size": 65536 00:08:19.974 } 00:08:19.974 ] 00:08:19.974 }' 00:08:19.974 06:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:19.974 06:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.544 06:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:20.544 06:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:20.544 06:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:20.544 06:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:20.803 06:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:20.804 06:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:20.804 06:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:21.063 [2024-08-14 06:39:48.245133] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:21.063 [2024-08-14 06:39:48.245381] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.063 [2024-08-14 06:39:48.257801] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.063 [2024-08-14 06:39:48.257871] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.063 [2024-08-14 06:39:48.257890] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:21.063 06:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:21.063 06:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:21.063 06:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:21.063 06:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:21.323 06:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:21.323 06:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:21.323 06:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:21.323 06:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 73801 00:08:21.323 06:39:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 73801 ']' 00:08:21.323 06:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 73801 00:08:21.323 06:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:08:21.323 06:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:21.323 06:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73801 00:08:21.583 06:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:21.583 06:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:21.583 06:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73801' 00:08:21.583 killing process with pid 73801 00:08:21.583 06:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 73801 00:08:21.583 [2024-08-14 06:39:48.594694] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.583 06:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 73801 00:08:21.583 [2024-08-14 06:39:48.595819] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:08:21.842 00:08:21.842 real 0m11.177s 00:08:21.842 user 0m20.115s 00:08:21.842 sys 0m1.682s 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.842 ************************************ 00:08:21.842 END TEST raid_state_function_test 00:08:21.842 ************************************ 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.842 06:39:48 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:21.842 06:39:48 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:21.842 06:39:48 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.842 06:39:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.842 ************************************ 00:08:21.842 START TEST raid_state_function_test_sb 00:08:21.842 ************************************ 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:21.842 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=74163 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 74163' 00:08:21.843 Process raid pid: 74163 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 74163 /var/tmp/spdk-raid.sock 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 74163 ']' 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:21.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:21.843 06:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.843 [2024-08-14 06:39:49.020882] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:08:21.843 [2024-08-14 06:39:49.021128] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.103 [2024-08-14 06:39:49.171399] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.103 [2024-08-14 06:39:49.225974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.103 [2024-08-14 06:39:49.271340] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.103 [2024-08-14 06:39:49.271399] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.043 06:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:23.043 06:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:08:23.043 06:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:23.043 [2024-08-14 06:39:50.224367] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:23.043 [2024-08-14 06:39:50.224439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:23.043 [2024-08-14 06:39:50.224456] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.043 [2024-08-14 06:39:50.224466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.043 06:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:23.043 06:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:23.043 06:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:23.043 06:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:23.043 06:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:23.043 06:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:23.043 06:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:23.043 06:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:23.043 06:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:23.043 06:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:23.043 06:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:23.043 06:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.302 06:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:23.302 "name": "Existed_Raid", 00:08:23.302 "uuid": "faebc220-7768-4bc6-9127-ad770c58c521", 00:08:23.302 "strip_size_kb": 0, 00:08:23.302 "state": "configuring", 00:08:23.302 "raid_level": "raid1", 00:08:23.302 "superblock": 
true, 00:08:23.302 "num_base_bdevs": 2, 00:08:23.302 "num_base_bdevs_discovered": 0, 00:08:23.302 "num_base_bdevs_operational": 2, 00:08:23.302 "base_bdevs_list": [ 00:08:23.302 { 00:08:23.302 "name": "BaseBdev1", 00:08:23.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.302 "is_configured": false, 00:08:23.302 "data_offset": 0, 00:08:23.302 "data_size": 0 00:08:23.302 }, 00:08:23.302 { 00:08:23.302 "name": "BaseBdev2", 00:08:23.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.302 "is_configured": false, 00:08:23.302 "data_offset": 0, 00:08:23.302 "data_size": 0 00:08:23.302 } 00:08:23.302 ] 00:08:23.302 }' 00:08:23.302 06:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:23.302 06:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.241 06:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:24.241 [2024-08-14 06:39:51.466374] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:24.241 [2024-08-14 06:39:51.466435] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:24.241 06:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:24.501 [2024-08-14 06:39:51.729885] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:24.501 [2024-08-14 06:39:51.730010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:24.501 [2024-08-14 06:39:51.730044] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.501 [2024-08-14 06:39:51.730055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:24.501 06:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:24.760 [2024-08-14 06:39:51.975193] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.760 BaseBdev1 00:08:24.760 06:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:24.760 06:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:08:24.760 06:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:24.760 06:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:08:24.760 06:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:24.760 06:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:24.760 06:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:25.019 06:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:25.278 [ 00:08:25.278 { 00:08:25.278 "name": 
"BaseBdev1", 00:08:25.278 "aliases": [ 00:08:25.278 "1997a3e9-c6e8-471f-96ef-6a21190f8ec2" 00:08:25.278 ], 00:08:25.278 "product_name": "Malloc disk", 00:08:25.278 "block_size": 512, 00:08:25.278 "num_blocks": 65536, 00:08:25.278 "uuid": "1997a3e9-c6e8-471f-96ef-6a21190f8ec2", 00:08:25.278 "assigned_rate_limits": { 00:08:25.278 "rw_ios_per_sec": 0, 00:08:25.278 "rw_mbytes_per_sec": 0, 00:08:25.278 "r_mbytes_per_sec": 0, 00:08:25.278 "w_mbytes_per_sec": 0 00:08:25.278 }, 00:08:25.278 "claimed": true, 00:08:25.278 "claim_type": "exclusive_write", 00:08:25.278 "zoned": false, 00:08:25.278 "supported_io_types": { 00:08:25.278 "read": true, 00:08:25.278 "write": true, 00:08:25.278 "unmap": true, 00:08:25.278 "flush": true, 00:08:25.278 "reset": true, 00:08:25.278 "nvme_admin": false, 00:08:25.278 "nvme_io": false, 00:08:25.278 "nvme_io_md": false, 00:08:25.278 "write_zeroes": true, 00:08:25.278 "zcopy": true, 00:08:25.278 "get_zone_info": false, 00:08:25.278 "zone_management": false, 00:08:25.278 "zone_append": false, 00:08:25.278 "compare": false, 00:08:25.278 "compare_and_write": false, 00:08:25.278 "abort": true, 00:08:25.278 "seek_hole": false, 00:08:25.278 "seek_data": false, 00:08:25.278 "copy": true, 00:08:25.278 "nvme_iov_md": false 00:08:25.278 }, 00:08:25.278 "memory_domains": [ 00:08:25.278 { 00:08:25.278 "dma_device_id": "system", 00:08:25.278 "dma_device_type": 1 00:08:25.278 }, 00:08:25.278 { 00:08:25.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.278 "dma_device_type": 2 00:08:25.278 } 00:08:25.278 ], 00:08:25.278 "driver_specific": {} 00:08:25.278 } 00:08:25.278 ] 00:08:25.278 06:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:08:25.278 06:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:25.278 06:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:25.278 06:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:25.278 06:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:25.278 06:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:25.278 06:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:25.278 06:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:25.279 06:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:25.279 06:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:25.279 06:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:25.279 06:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:25.279 06:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.537 06:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:25.537 "name": "Existed_Raid", 00:08:25.537 "uuid": "eddd389f-fbfa-4066-b356-0568a5021024", 00:08:25.537 "strip_size_kb": 0, 00:08:25.537 "state": "configuring", 00:08:25.537 "raid_level": "raid1", 00:08:25.537 
"superblock": true, 00:08:25.537 "num_base_bdevs": 2, 00:08:25.537 "num_base_bdevs_discovered": 1, 00:08:25.537 "num_base_bdevs_operational": 2, 00:08:25.537 "base_bdevs_list": [ 00:08:25.537 { 00:08:25.537 "name": "BaseBdev1", 00:08:25.537 "uuid": "1997a3e9-c6e8-471f-96ef-6a21190f8ec2", 00:08:25.537 "is_configured": true, 00:08:25.537 "data_offset": 2048, 00:08:25.537 "data_size": 63488 00:08:25.537 }, 00:08:25.537 { 00:08:25.537 "name": "BaseBdev2", 00:08:25.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.537 "is_configured": false, 00:08:25.537 "data_offset": 0, 00:08:25.537 "data_size": 0 00:08:25.537 } 00:08:25.537 ] 00:08:25.537 }' 00:08:25.537 06:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:25.537 06:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.104 06:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:26.363 [2024-08-14 06:39:53.588649] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:26.363 [2024-08-14 06:39:53.588820] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:26.363 06:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:26.622 [2024-08-14 06:39:53.820512] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.622 [2024-08-14 06:39:53.822794] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:26.622 [2024-08-14 06:39:53.822921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:26.622 06:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:26.622 06:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:26.622 06:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:26.622 06:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:26.622 06:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:26.622 06:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:26.622 06:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:26.622 06:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:26.622 06:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:26.622 06:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:26.622 06:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:26.622 06:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:26.622 06:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:26.622 
06:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.190 06:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:27.190 "name": "Existed_Raid", 00:08:27.190 "uuid": "6f323e16-6938-4f89-a816-d98be44b9c55", 00:08:27.190 "strip_size_kb": 0, 00:08:27.190 "state": "configuring", 00:08:27.190 "raid_level": "raid1", 00:08:27.190 "superblock": true, 00:08:27.190 "num_base_bdevs": 2, 00:08:27.190 "num_base_bdevs_discovered": 1, 00:08:27.190 "num_base_bdevs_operational": 2, 00:08:27.190 "base_bdevs_list": [ 00:08:27.190 { 00:08:27.190 "name": "BaseBdev1", 00:08:27.190 "uuid": "1997a3e9-c6e8-471f-96ef-6a21190f8ec2", 00:08:27.190 "is_configured": true, 00:08:27.190 "data_offset": 2048, 00:08:27.190 "data_size": 63488 00:08:27.190 }, 00:08:27.190 { 00:08:27.190 "name": "BaseBdev2", 00:08:27.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.190 "is_configured": false, 00:08:27.190 "data_offset": 0, 00:08:27.190 "data_size": 0 00:08:27.190 } 00:08:27.190 ] 00:08:27.190 }' 00:08:27.190 06:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:27.190 06:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.763 06:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:27.763 [2024-08-14 06:39:54.981842] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.763 [2024-08-14 06:39:54.982076] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:27.763 [2024-08-14 06:39:54.982111] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:27.763 [2024-08-14 06:39:54.982449] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:27.763 BaseBdev2 00:08:27.763 [2024-08-14 06:39:54.982692] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:27.763 [2024-08-14 06:39:54.982710] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:27.763 [2024-08-14 06:39:54.982863] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.763 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:27.763 06:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:08:27.763 06:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:27.763 06:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:08:27.763 06:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:27.763 06:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:27.763 06:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:28.029 06:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:28.290 [ 00:08:28.290 { 
00:08:28.290 "name": "BaseBdev2", 00:08:28.290 "aliases": [ 00:08:28.290 "1b3e32a6-51ff-420a-a503-bd7031725953" 00:08:28.290 ], 00:08:28.290 "product_name": "Malloc disk", 00:08:28.290 "block_size": 512, 00:08:28.290 "num_blocks": 65536, 00:08:28.290 "uuid": "1b3e32a6-51ff-420a-a503-bd7031725953", 00:08:28.290 "assigned_rate_limits": { 00:08:28.290 "rw_ios_per_sec": 0, 00:08:28.290 "rw_mbytes_per_sec": 0, 00:08:28.290 "r_mbytes_per_sec": 0, 00:08:28.290 "w_mbytes_per_sec": 0 00:08:28.290 }, 00:08:28.290 "claimed": true, 00:08:28.290 "claim_type": "exclusive_write", 00:08:28.290 "zoned": false, 00:08:28.290 "supported_io_types": { 00:08:28.290 "read": true, 00:08:28.290 "write": true, 00:08:28.290 "unmap": true, 00:08:28.290 "flush": true, 00:08:28.290 "reset": true, 00:08:28.290 "nvme_admin": false, 00:08:28.290 "nvme_io": false, 00:08:28.290 "nvme_io_md": false, 00:08:28.290 "write_zeroes": true, 00:08:28.290 "zcopy": true, 00:08:28.290 "get_zone_info": false, 00:08:28.290 "zone_management": false, 00:08:28.290 "zone_append": false, 00:08:28.290 "compare": false, 00:08:28.290 "compare_and_write": false, 00:08:28.290 "abort": true, 00:08:28.290 "seek_hole": false, 00:08:28.290 "seek_data": false, 00:08:28.290 "copy": true, 00:08:28.290 "nvme_iov_md": false 00:08:28.290 }, 00:08:28.290 "memory_domains": [ 00:08:28.290 { 00:08:28.290 "dma_device_id": "system", 00:08:28.290 "dma_device_type": 1 00:08:28.290 }, 00:08:28.290 { 00:08:28.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.290 "dma_device_type": 2 00:08:28.290 } 00:08:28.290 ], 00:08:28.290 "driver_specific": {} 00:08:28.290 } 00:08:28.290 ] 00:08:28.290 06:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:08:28.290 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:28.290 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:28.290 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:28.290 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:28.290 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:28.290 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:28.290 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:28.290 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:28.290 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:28.290 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:28.290 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:28.290 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:28.290 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.290 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.550 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:08:28.550 "name": "Existed_Raid", 00:08:28.550 "uuid": "6f323e16-6938-4f89-a816-d98be44b9c55", 00:08:28.550 "strip_size_kb": 0, 00:08:28.550 "state": "online", 00:08:28.550 "raid_level": "raid1", 00:08:28.550 "superblock": true, 00:08:28.550 "num_base_bdevs": 2, 00:08:28.550 "num_base_bdevs_discovered": 2, 00:08:28.550 "num_base_bdevs_operational": 2, 00:08:28.550 "base_bdevs_list": [ 00:08:28.550 { 00:08:28.550 "name": "BaseBdev1", 00:08:28.550 "uuid": "1997a3e9-c6e8-471f-96ef-6a21190f8ec2", 00:08:28.550 "is_configured": true, 00:08:28.550 "data_offset": 2048, 00:08:28.550 "data_size": 63488 00:08:28.550 }, 00:08:28.550 { 00:08:28.550 "name": "BaseBdev2", 00:08:28.550 "uuid": "1b3e32a6-51ff-420a-a503-bd7031725953", 00:08:28.550 "is_configured": true, 00:08:28.550 "data_offset": 2048, 00:08:28.550 "data_size": 63488 00:08:28.550 } 00:08:28.550 ] 00:08:28.550 }' 00:08:28.550 06:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:28.550 06:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.491 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:29.491 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:29.491 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:29.491 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:29.491 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:29.491 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:08:29.491 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:29.491 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:29.491 [2024-08-14 06:39:56.599665] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.491 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:29.491 "name": "Existed_Raid", 00:08:29.491 "aliases": [ 00:08:29.491 "6f323e16-6938-4f89-a816-d98be44b9c55" 00:08:29.491 ], 00:08:29.491 "product_name": "Raid Volume", 00:08:29.491 "block_size": 512, 00:08:29.491 "num_blocks": 63488, 00:08:29.491 "uuid": "6f323e16-6938-4f89-a816-d98be44b9c55", 00:08:29.491 "assigned_rate_limits": { 00:08:29.491 "rw_ios_per_sec": 0, 00:08:29.491 "rw_mbytes_per_sec": 0, 00:08:29.491 "r_mbytes_per_sec": 0, 00:08:29.491 "w_mbytes_per_sec": 0 00:08:29.491 }, 00:08:29.491 "claimed": false, 00:08:29.491 "zoned": false, 00:08:29.491 "supported_io_types": { 00:08:29.491 "read": true, 00:08:29.491 "write": true, 00:08:29.491 "unmap": false, 00:08:29.491 "flush": false, 00:08:29.491 "reset": true, 00:08:29.491 "nvme_admin": false, 00:08:29.491 "nvme_io": false, 00:08:29.491 "nvme_io_md": false, 00:08:29.491 "write_zeroes": true, 00:08:29.491 "zcopy": false, 00:08:29.491 "get_zone_info": false, 00:08:29.491 "zone_management": false, 00:08:29.491 "zone_append": false, 00:08:29.491 "compare": false, 00:08:29.491 "compare_and_write": false, 00:08:29.491 "abort": false, 00:08:29.491 "seek_hole": false, 00:08:29.491 "seek_data": false, 00:08:29.491 "copy": false, 00:08:29.491 "nvme_iov_md": false 
00:08:29.491 }, 00:08:29.491 "memory_domains": [ 00:08:29.491 { 00:08:29.491 "dma_device_id": "system", 00:08:29.491 "dma_device_type": 1 00:08:29.491 }, 00:08:29.491 { 00:08:29.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.491 "dma_device_type": 2 00:08:29.491 }, 00:08:29.491 { 00:08:29.491 "dma_device_id": "system", 00:08:29.491 "dma_device_type": 1 00:08:29.491 }, 00:08:29.491 { 00:08:29.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.491 "dma_device_type": 2 00:08:29.491 } 00:08:29.491 ], 00:08:29.491 "driver_specific": { 00:08:29.491 "raid": { 00:08:29.491 "uuid": "6f323e16-6938-4f89-a816-d98be44b9c55", 00:08:29.491 "strip_size_kb": 0, 00:08:29.491 "state": "online", 00:08:29.491 "raid_level": "raid1", 00:08:29.491 "superblock": true, 00:08:29.491 "num_base_bdevs": 2, 00:08:29.491 "num_base_bdevs_discovered": 2, 00:08:29.491 "num_base_bdevs_operational": 2, 00:08:29.491 "base_bdevs_list": [ 00:08:29.491 { 00:08:29.491 "name": "BaseBdev1", 00:08:29.491 "uuid": "1997a3e9-c6e8-471f-96ef-6a21190f8ec2", 00:08:29.491 "is_configured": true, 00:08:29.491 "data_offset": 2048, 00:08:29.491 "data_size": 63488 00:08:29.491 }, 00:08:29.491 { 00:08:29.491 "name": "BaseBdev2", 00:08:29.491 "uuid": "1b3e32a6-51ff-420a-a503-bd7031725953", 00:08:29.491 "is_configured": true, 00:08:29.491 "data_offset": 2048, 00:08:29.491 "data_size": 63488 00:08:29.491 } 00:08:29.491 ] 00:08:29.491 } 00:08:29.491 } 00:08:29.491 }' 00:08:29.491 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.491 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:29.491 BaseBdev2' 00:08:29.491 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:29.491 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:29.491 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:29.752 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:29.752 "name": "BaseBdev1", 00:08:29.752 "aliases": [ 00:08:29.752 "1997a3e9-c6e8-471f-96ef-6a21190f8ec2" 00:08:29.752 ], 00:08:29.752 "product_name": "Malloc disk", 00:08:29.752 "block_size": 512, 00:08:29.752 "num_blocks": 65536, 00:08:29.752 "uuid": "1997a3e9-c6e8-471f-96ef-6a21190f8ec2", 00:08:29.752 "assigned_rate_limits": { 00:08:29.752 "rw_ios_per_sec": 0, 00:08:29.773 "rw_mbytes_per_sec": 0, 00:08:29.773 "r_mbytes_per_sec": 0, 00:08:29.773 "w_mbytes_per_sec": 0 00:08:29.773 }, 00:08:29.773 "claimed": true, 00:08:29.773 "claim_type": "exclusive_write", 00:08:29.773 "zoned": false, 00:08:29.773 "supported_io_types": { 00:08:29.773 "read": true, 00:08:29.773 "write": true, 00:08:29.773 "unmap": true, 00:08:29.773 "flush": true, 00:08:29.773 "reset": true, 00:08:29.773 "nvme_admin": false, 00:08:29.773 "nvme_io": false, 00:08:29.773 "nvme_io_md": false, 00:08:29.773 "write_zeroes": true, 00:08:29.773 "zcopy": true, 00:08:29.773 "get_zone_info": false, 00:08:29.773 "zone_management": false, 00:08:29.773 "zone_append": false, 00:08:29.773 "compare": false, 00:08:29.773 "compare_and_write": false, 00:08:29.773 "abort": true, 00:08:29.773 "seek_hole": false, 00:08:29.773 "seek_data": false, 00:08:29.773 "copy": true, 00:08:29.773 "nvme_iov_md": false 
00:08:29.773 }, 00:08:29.773 "memory_domains": [ 00:08:29.773 { 00:08:29.773 "dma_device_id": "system", 00:08:29.773 "dma_device_type": 1 00:08:29.773 }, 00:08:29.773 { 00:08:29.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.773 "dma_device_type": 2 00:08:29.773 } 00:08:29.773 ], 00:08:29.773 "driver_specific": {} 00:08:29.773 }' 00:08:29.773 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:29.773 06:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:30.033 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:30.033 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:30.033 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:30.033 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:30.033 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:30.033 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:30.033 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:30.033 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:30.033 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:30.292 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:30.292 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:30.293 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:30.293 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:30.553 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:30.553 "name": "BaseBdev2", 00:08:30.553 "aliases": [ 00:08:30.553 "1b3e32a6-51ff-420a-a503-bd7031725953" 00:08:30.553 ], 00:08:30.553 "product_name": "Malloc disk", 00:08:30.553 "block_size": 512, 00:08:30.553 "num_blocks": 65536, 00:08:30.553 "uuid": "1b3e32a6-51ff-420a-a503-bd7031725953", 00:08:30.553 "assigned_rate_limits": { 00:08:30.553 "rw_ios_per_sec": 0, 00:08:30.553 "rw_mbytes_per_sec": 0, 00:08:30.553 "r_mbytes_per_sec": 0, 00:08:30.553 "w_mbytes_per_sec": 0 00:08:30.553 }, 00:08:30.553 "claimed": true, 00:08:30.553 "claim_type": "exclusive_write", 00:08:30.553 "zoned": false, 00:08:30.553 "supported_io_types": { 00:08:30.553 "read": true, 00:08:30.553 "write": true, 00:08:30.553 "unmap": true, 00:08:30.553 "flush": true, 00:08:30.553 "reset": true, 00:08:30.553 "nvme_admin": false, 00:08:30.553 "nvme_io": false, 00:08:30.553 "nvme_io_md": false, 00:08:30.553 "write_zeroes": true, 00:08:30.553 "zcopy": true, 00:08:30.553 "get_zone_info": false, 00:08:30.553 "zone_management": false, 00:08:30.553 "zone_append": false, 00:08:30.553 "compare": false, 00:08:30.553 "compare_and_write": false, 00:08:30.553 "abort": true, 00:08:30.553 "seek_hole": false, 00:08:30.553 "seek_data": false, 00:08:30.553 "copy": true, 00:08:30.553 "nvme_iov_md": false 00:08:30.553 }, 00:08:30.553 "memory_domains": [ 00:08:30.553 { 00:08:30.553 "dma_device_id": "system", 00:08:30.553 "dma_device_type": 1 00:08:30.553 
}, 00:08:30.553 { 00:08:30.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.553 "dma_device_type": 2 00:08:30.553 } 00:08:30.553 ], 00:08:30.553 "driver_specific": {} 00:08:30.553 }' 00:08:30.553 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:30.553 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:30.553 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:30.553 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:30.553 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:30.553 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:30.553 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:30.553 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:30.813 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:30.813 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:30.813 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:30.813 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:30.813 06:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:31.073 [2024-08-14 06:39:58.144829] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- 
# jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.073 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:31.333 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:31.333 "name": "Existed_Raid", 00:08:31.333 "uuid": "6f323e16-6938-4f89-a816-d98be44b9c55", 00:08:31.333 "strip_size_kb": 0, 00:08:31.333 "state": "online", 00:08:31.333 "raid_level": "raid1", 00:08:31.333 "superblock": true, 00:08:31.333 "num_base_bdevs": 2, 00:08:31.333 "num_base_bdevs_discovered": 1, 00:08:31.333 "num_base_bdevs_operational": 1, 00:08:31.333 "base_bdevs_list": [ 00:08:31.333 { 00:08:31.333 "name": null, 00:08:31.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.333 "is_configured": false, 00:08:31.333 "data_offset": 2048, 00:08:31.333 "data_size": 63488 00:08:31.333 }, 00:08:31.333 { 00:08:31.333 "name": "BaseBdev2", 00:08:31.333 "uuid": "1b3e32a6-51ff-420a-a503-bd7031725953", 00:08:31.333 "is_configured": true, 00:08:31.333 "data_offset": 2048, 00:08:31.333 "data_size": 63488 00:08:31.333 } 00:08:31.333 ] 00:08:31.333 }' 00:08:31.333 06:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:31.333 06:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.904 06:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:31.904 06:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:31.904 06:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:31.904 06:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:32.164 06:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:32.164 06:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:32.164 06:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:32.424 [2024-08-14 06:39:59.570471] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:32.424 [2024-08-14 06:39:59.570608] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.424 [2024-08-14 06:39:59.582882] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.424 [2024-08-14 06:39:59.582950] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.424 [2024-08-14 06:39:59.582963] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:32.424 06:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:32.424 06:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:32.424 06:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:32.424 06:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:08:32.685 06:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:32.685 06:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:32.685 06:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:32.685 06:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 74163 00:08:32.685 06:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 74163 ']' 00:08:32.685 06:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 74163 00:08:32.685 06:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:08:32.685 06:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:32.685 06:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74163 00:08:32.685 06:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:32.685 06:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:32.685 06:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74163' 00:08:32.685 killing process with pid 74163 00:08:32.685 06:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 74163 00:08:32.685 [2024-08-14 06:39:59.852613] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.685 06:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 74163 00:08:32.685 [2024-08-14 06:39:59.853807] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.945 06:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:08:32.945 00:08:32.945 real 0m11.186s 00:08:32.945 user 0m20.239s 00:08:32.945 sys 0m1.626s 00:08:32.945 ************************************ 00:08:32.945 END TEST raid_state_function_test_sb 00:08:32.945 ************************************ 00:08:32.945 06:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.945 06:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.945 06:40:00 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:32.945 06:40:00 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:32.945 06:40:00 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:32.945 06:40:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.945 ************************************ 00:08:32.945 START TEST raid_superblock_test 00:08:32.945 ************************************ 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 
-- # base_bdevs_pt=() 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=74519 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 74519 /var/tmp/spdk-raid.sock 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 74519 ']' 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:32.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:32.945 06:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.205 [2024-08-14 06:40:00.270217] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
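The fixture being brought up here is just a bare bdev_svc app listening on a private RPC socket; a minimal sketch of the equivalent manual setup, where the polling loop is an illustrative stand-in for the waitforlisten helper used by the test:
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # poll until the target answers RPCs on the socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.1
  done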
00:08:33.205 [2024-08-14 06:40:00.270465] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74519 ] 00:08:33.205 [2024-08-14 06:40:00.417720] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.465 [2024-08-14 06:40:00.472781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.465 [2024-08-14 06:40:00.518487] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.465 [2024-08-14 06:40:00.518623] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.034 06:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:34.034 06:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:08:34.034 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:08:34.034 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:08:34.034 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:08:34.034 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:08:34.034 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:34.034 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:34.034 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:08:34.034 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:34.034 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:34.294 malloc1 00:08:34.294 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:34.554 [2024-08-14 06:40:01.717368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:34.554 [2024-08-14 06:40:01.717472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.554 [2024-08-14 06:40:01.717501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:34.554 [2024-08-14 06:40:01.717520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.554 [2024-08-14 06:40:01.720081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.554 [2024-08-14 06:40:01.720134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:34.554 pt1 00:08:34.554 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:08:34.554 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:08:34.554 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:08:34.554 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:08:34.554 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:34.554 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:34.554 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:08:34.554 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:34.554 06:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:34.813 malloc2 00:08:34.813 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:35.073 [2024-08-14 06:40:02.234482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:35.073 [2024-08-14 06:40:02.234672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.073 [2024-08-14 06:40:02.234720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:35.073 [2024-08-14 06:40:02.234759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.073 [2024-08-14 06:40:02.237374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.073 [2024-08-14 06:40:02.237494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:35.073 pt2 00:08:35.073 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:08:35.073 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:08:35.073 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:08:35.334 [2024-08-14 06:40:02.474122] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:35.334 [2024-08-14 06:40:02.476425] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:35.334 [2024-08-14 06:40:02.476685] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:35.334 [2024-08-14 06:40:02.476742] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:35.334 [2024-08-14 06:40:02.477121] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:35.334 [2024-08-14 06:40:02.477349] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:35.334 [2024-08-14 06:40:02.477405] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:35.334 [2024-08-14 06:40:02.477645] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.334 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:35.334 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:35.334 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:35.334 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:35.334 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- 
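Condensing the construction sequence above: each base bdev is a 32 MiB, 512-byte-block malloc bdev wrapped in a passthru bdev with a fixed UUID, and the array is created with -s so a superblock is written to the base bdevs; the same commands as in this run, collected into one sketch:
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_malloc_create 32 512 -b malloc1
  $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $rpc bdev_malloc_create 32 512 -b malloc2
  $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # -s requests an on-disk superblock so the array can be reassembled later
  $rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s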
# local strip_size=0 00:08:35.334 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:35.334 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:35.334 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:35.334 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:35.334 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:35.334 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:35.334 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.594 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:35.594 "name": "raid_bdev1", 00:08:35.594 "uuid": "5c4ec1c7-c9a2-4c91-a810-b1e0d5443434", 00:08:35.594 "strip_size_kb": 0, 00:08:35.594 "state": "online", 00:08:35.594 "raid_level": "raid1", 00:08:35.594 "superblock": true, 00:08:35.594 "num_base_bdevs": 2, 00:08:35.594 "num_base_bdevs_discovered": 2, 00:08:35.594 "num_base_bdevs_operational": 2, 00:08:35.594 "base_bdevs_list": [ 00:08:35.594 { 00:08:35.594 "name": "pt1", 00:08:35.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:35.594 "is_configured": true, 00:08:35.594 "data_offset": 2048, 00:08:35.594 "data_size": 63488 00:08:35.594 }, 00:08:35.594 { 00:08:35.594 "name": "pt2", 00:08:35.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.594 "is_configured": true, 00:08:35.594 "data_offset": 2048, 00:08:35.594 "data_size": 63488 00:08:35.594 } 00:08:35.594 ] 00:08:35.594 }' 00:08:35.594 06:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:35.594 06:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.180 06:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:08:36.181 06:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:36.181 06:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:36.181 06:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:36.181 06:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:36.181 06:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:36.181 06:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:36.181 06:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:36.440 [2024-08-14 06:40:03.632576] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.440 06:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:36.440 "name": "raid_bdev1", 00:08:36.440 "aliases": [ 00:08:36.440 "5c4ec1c7-c9a2-4c91-a810-b1e0d5443434" 00:08:36.441 ], 00:08:36.441 "product_name": "Raid Volume", 00:08:36.441 "block_size": 512, 00:08:36.441 "num_blocks": 63488, 00:08:36.441 "uuid": "5c4ec1c7-c9a2-4c91-a810-b1e0d5443434", 00:08:36.441 "assigned_rate_limits": { 00:08:36.441 "rw_ios_per_sec": 0, 00:08:36.441 
"rw_mbytes_per_sec": 0, 00:08:36.441 "r_mbytes_per_sec": 0, 00:08:36.441 "w_mbytes_per_sec": 0 00:08:36.441 }, 00:08:36.441 "claimed": false, 00:08:36.441 "zoned": false, 00:08:36.441 "supported_io_types": { 00:08:36.441 "read": true, 00:08:36.441 "write": true, 00:08:36.441 "unmap": false, 00:08:36.441 "flush": false, 00:08:36.441 "reset": true, 00:08:36.441 "nvme_admin": false, 00:08:36.441 "nvme_io": false, 00:08:36.441 "nvme_io_md": false, 00:08:36.441 "write_zeroes": true, 00:08:36.441 "zcopy": false, 00:08:36.441 "get_zone_info": false, 00:08:36.441 "zone_management": false, 00:08:36.441 "zone_append": false, 00:08:36.441 "compare": false, 00:08:36.441 "compare_and_write": false, 00:08:36.441 "abort": false, 00:08:36.441 "seek_hole": false, 00:08:36.441 "seek_data": false, 00:08:36.441 "copy": false, 00:08:36.441 "nvme_iov_md": false 00:08:36.441 }, 00:08:36.441 "memory_domains": [ 00:08:36.441 { 00:08:36.441 "dma_device_id": "system", 00:08:36.441 "dma_device_type": 1 00:08:36.441 }, 00:08:36.441 { 00:08:36.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.441 "dma_device_type": 2 00:08:36.441 }, 00:08:36.441 { 00:08:36.441 "dma_device_id": "system", 00:08:36.441 "dma_device_type": 1 00:08:36.441 }, 00:08:36.441 { 00:08:36.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.441 "dma_device_type": 2 00:08:36.441 } 00:08:36.441 ], 00:08:36.441 "driver_specific": { 00:08:36.441 "raid": { 00:08:36.441 "uuid": "5c4ec1c7-c9a2-4c91-a810-b1e0d5443434", 00:08:36.441 "strip_size_kb": 0, 00:08:36.441 "state": "online", 00:08:36.441 "raid_level": "raid1", 00:08:36.441 "superblock": true, 00:08:36.441 "num_base_bdevs": 2, 00:08:36.441 "num_base_bdevs_discovered": 2, 00:08:36.441 "num_base_bdevs_operational": 2, 00:08:36.441 "base_bdevs_list": [ 00:08:36.441 { 00:08:36.441 "name": "pt1", 00:08:36.441 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:36.441 "is_configured": true, 00:08:36.441 "data_offset": 2048, 00:08:36.441 "data_size": 63488 00:08:36.441 }, 00:08:36.441 { 00:08:36.441 "name": "pt2", 00:08:36.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:36.441 "is_configured": true, 00:08:36.441 "data_offset": 2048, 00:08:36.441 "data_size": 63488 00:08:36.441 } 00:08:36.441 ] 00:08:36.441 } 00:08:36.441 } 00:08:36.441 }' 00:08:36.441 06:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:36.699 06:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:36.699 pt2' 00:08:36.699 06:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:36.699 06:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:36.699 06:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:36.958 06:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:36.958 "name": "pt1", 00:08:36.958 "aliases": [ 00:08:36.958 "00000000-0000-0000-0000-000000000001" 00:08:36.958 ], 00:08:36.958 "product_name": "passthru", 00:08:36.958 "block_size": 512, 00:08:36.958 "num_blocks": 65536, 00:08:36.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:36.958 "assigned_rate_limits": { 00:08:36.958 "rw_ios_per_sec": 0, 00:08:36.958 "rw_mbytes_per_sec": 0, 00:08:36.958 "r_mbytes_per_sec": 0, 00:08:36.958 "w_mbytes_per_sec": 0 00:08:36.958 }, 00:08:36.958 
"claimed": true, 00:08:36.958 "claim_type": "exclusive_write", 00:08:36.958 "zoned": false, 00:08:36.958 "supported_io_types": { 00:08:36.958 "read": true, 00:08:36.958 "write": true, 00:08:36.958 "unmap": true, 00:08:36.958 "flush": true, 00:08:36.958 "reset": true, 00:08:36.958 "nvme_admin": false, 00:08:36.958 "nvme_io": false, 00:08:36.958 "nvme_io_md": false, 00:08:36.958 "write_zeroes": true, 00:08:36.958 "zcopy": true, 00:08:36.958 "get_zone_info": false, 00:08:36.958 "zone_management": false, 00:08:36.958 "zone_append": false, 00:08:36.958 "compare": false, 00:08:36.958 "compare_and_write": false, 00:08:36.958 "abort": true, 00:08:36.958 "seek_hole": false, 00:08:36.958 "seek_data": false, 00:08:36.958 "copy": true, 00:08:36.958 "nvme_iov_md": false 00:08:36.958 }, 00:08:36.958 "memory_domains": [ 00:08:36.958 { 00:08:36.958 "dma_device_id": "system", 00:08:36.958 "dma_device_type": 1 00:08:36.958 }, 00:08:36.958 { 00:08:36.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.959 "dma_device_type": 2 00:08:36.959 } 00:08:36.959 ], 00:08:36.959 "driver_specific": { 00:08:36.959 "passthru": { 00:08:36.959 "name": "pt1", 00:08:36.959 "base_bdev_name": "malloc1" 00:08:36.959 } 00:08:36.959 } 00:08:36.959 }' 00:08:36.959 06:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:36.959 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:36.959 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:36.959 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:36.959 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:36.959 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:36.959 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:37.218 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:37.218 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:37.218 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:37.218 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:37.218 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:37.218 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:37.218 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:37.218 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:37.477 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:37.477 "name": "pt2", 00:08:37.477 "aliases": [ 00:08:37.477 "00000000-0000-0000-0000-000000000002" 00:08:37.477 ], 00:08:37.477 "product_name": "passthru", 00:08:37.477 "block_size": 512, 00:08:37.477 "num_blocks": 65536, 00:08:37.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:37.477 "assigned_rate_limits": { 00:08:37.477 "rw_ios_per_sec": 0, 00:08:37.477 "rw_mbytes_per_sec": 0, 00:08:37.477 "r_mbytes_per_sec": 0, 00:08:37.477 "w_mbytes_per_sec": 0 00:08:37.477 }, 00:08:37.477 "claimed": true, 00:08:37.477 "claim_type": "exclusive_write", 00:08:37.477 "zoned": false, 00:08:37.477 "supported_io_types": { 00:08:37.477 "read": 
true, 00:08:37.477 "write": true, 00:08:37.477 "unmap": true, 00:08:37.477 "flush": true, 00:08:37.477 "reset": true, 00:08:37.477 "nvme_admin": false, 00:08:37.477 "nvme_io": false, 00:08:37.477 "nvme_io_md": false, 00:08:37.477 "write_zeroes": true, 00:08:37.477 "zcopy": true, 00:08:37.477 "get_zone_info": false, 00:08:37.477 "zone_management": false, 00:08:37.477 "zone_append": false, 00:08:37.477 "compare": false, 00:08:37.477 "compare_and_write": false, 00:08:37.477 "abort": true, 00:08:37.477 "seek_hole": false, 00:08:37.477 "seek_data": false, 00:08:37.477 "copy": true, 00:08:37.477 "nvme_iov_md": false 00:08:37.477 }, 00:08:37.477 "memory_domains": [ 00:08:37.477 { 00:08:37.477 "dma_device_id": "system", 00:08:37.477 "dma_device_type": 1 00:08:37.477 }, 00:08:37.477 { 00:08:37.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.477 "dma_device_type": 2 00:08:37.477 } 00:08:37.477 ], 00:08:37.477 "driver_specific": { 00:08:37.477 "passthru": { 00:08:37.477 "name": "pt2", 00:08:37.477 "base_bdev_name": "malloc2" 00:08:37.477 } 00:08:37.477 } 00:08:37.477 }' 00:08:37.477 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:37.477 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:37.477 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:37.477 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:37.737 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:37.737 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:37.737 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:37.737 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:37.737 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:37.737 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:37.737 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:37.737 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:37.996 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:37.996 06:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:08:37.996 [2024-08-14 06:40:05.217983] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.996 06:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=5c4ec1c7-c9a2-4c91-a810-b1e0d5443434 00:08:37.996 06:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 5c4ec1c7-c9a2-4c91-a810-b1e0d5443434 ']' 00:08:37.996 06:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:38.564 [2024-08-14 06:40:05.521262] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:38.564 [2024-08-14 06:40:05.521305] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.564 [2024-08-14 06:40:05.521414] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.564 [2024-08-14 06:40:05.521495] bdev_raid.c: 
464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.564 [2024-08-14 06:40:05.521520] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:38.564 06:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:38.564 06:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:08:38.564 06:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:08:38.564 06:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:08:38.564 06:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:08:38.564 06:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:38.822 06:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:08:38.823 06:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:39.081 06:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:39.081 06:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:39.341 06:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:08:39.341 06:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:39.341 06:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:08:39.341 06:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:39.341 06:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.341 06:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:08:39.341 06:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.341 06:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:08:39.341 06:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.341 06:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:08:39.341 06:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.341 06:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:39.341 06:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 
malloc2' -n raid_bdev1 00:08:39.601 [2024-08-14 06:40:06.763171] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:39.601 [2024-08-14 06:40:06.765374] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:39.601 [2024-08-14 06:40:06.765459] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:39.601 [2024-08-14 06:40:06.765519] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:39.601 [2024-08-14 06:40:06.765536] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:39.601 [2024-08-14 06:40:06.765550] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:39.601 request: 00:08:39.601 { 00:08:39.601 "name": "raid_bdev1", 00:08:39.601 "raid_level": "raid1", 00:08:39.601 "base_bdevs": [ 00:08:39.601 "malloc1", 00:08:39.601 "malloc2" 00:08:39.601 ], 00:08:39.601 "superblock": false, 00:08:39.601 "method": "bdev_raid_create", 00:08:39.601 "req_id": 1 00:08:39.601 } 00:08:39.601 Got JSON-RPC error response 00:08:39.601 response: 00:08:39.601 { 00:08:39.601 "code": -17, 00:08:39.601 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:39.601 } 00:08:39.601 06:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:08:39.601 06:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:08:39.601 06:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:08:39.601 06:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:08:39.601 06:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:08:39.601 06:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:39.860 06:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:08:39.860 06:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:08:39.860 06:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:40.120 [2024-08-14 06:40:07.254313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:40.120 [2024-08-14 06:40:07.254407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.120 [2024-08-14 06:40:07.254428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:40.120 [2024-08-14 06:40:07.254441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.120 [2024-08-14 06:40:07.256988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.120 [2024-08-14 06:40:07.257053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:40.120 [2024-08-14 06:40:07.257154] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:40.120 [2024-08-14 06:40:07.257235] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:40.120 pt1 00:08:40.120 06:40:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:40.120 06:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:40.120 06:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:40.120 06:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:40.120 06:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:40.120 06:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:40.120 06:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:40.120 06:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:40.120 06:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:40.120 06:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:40.120 06:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.120 06:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:40.379 06:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:40.379 "name": "raid_bdev1", 00:08:40.379 "uuid": "5c4ec1c7-c9a2-4c91-a810-b1e0d5443434", 00:08:40.379 "strip_size_kb": 0, 00:08:40.379 "state": "configuring", 00:08:40.379 "raid_level": "raid1", 00:08:40.379 "superblock": true, 00:08:40.379 "num_base_bdevs": 2, 00:08:40.379 "num_base_bdevs_discovered": 1, 00:08:40.379 "num_base_bdevs_operational": 2, 00:08:40.379 "base_bdevs_list": [ 00:08:40.379 { 00:08:40.379 "name": "pt1", 00:08:40.379 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:40.379 "is_configured": true, 00:08:40.379 "data_offset": 2048, 00:08:40.379 "data_size": 63488 00:08:40.379 }, 00:08:40.379 { 00:08:40.379 "name": null, 00:08:40.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.379 "is_configured": false, 00:08:40.379 "data_offset": 2048, 00:08:40.379 "data_size": 63488 00:08:40.379 } 00:08:40.379 ] 00:08:40.379 }' 00:08:40.379 06:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:40.380 06:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.949 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:08:40.949 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:08:40.949 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:08:40.949 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:41.209 [2024-08-14 06:40:08.420557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:41.209 [2024-08-14 06:40:08.420658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.209 [2024-08-14 06:40:08.420680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:41.209 [2024-08-14 06:40:08.420693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.209 
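What this part of the test exercises is superblock-driven reassembly: after the array and its passthru bdevs were deleted, re-registering pt1 alone leaves raid_bdev1 in the configuring state, and re-registering pt2 (whose creation continues in the trace below) lets the examine path complete the array again without another bdev_raid_create. Roughly, with the same commands and fields as in this run:
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # configuring
  $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # online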
[2024-08-14 06:40:08.421186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.209 [2024-08-14 06:40:08.421222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:41.209 [2024-08-14 06:40:08.421311] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:41.209 [2024-08-14 06:40:08.421338] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:41.209 [2024-08-14 06:40:08.421453] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:41.209 [2024-08-14 06:40:08.421479] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:41.209 [2024-08-14 06:40:08.421764] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:41.209 [2024-08-14 06:40:08.421920] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:41.209 [2024-08-14 06:40:08.421931] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:41.209 [2024-08-14 06:40:08.422050] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.209 pt2 00:08:41.209 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:08:41.209 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:08:41.209 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:41.209 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:41.209 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:41.209 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:41.209 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:41.209 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:41.209 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:41.209 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:41.209 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:41.209 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:41.209 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:41.209 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.468 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:41.468 "name": "raid_bdev1", 00:08:41.468 "uuid": "5c4ec1c7-c9a2-4c91-a810-b1e0d5443434", 00:08:41.468 "strip_size_kb": 0, 00:08:41.468 "state": "online", 00:08:41.468 "raid_level": "raid1", 00:08:41.468 "superblock": true, 00:08:41.468 "num_base_bdevs": 2, 00:08:41.468 "num_base_bdevs_discovered": 2, 00:08:41.468 "num_base_bdevs_operational": 2, 00:08:41.468 "base_bdevs_list": [ 00:08:41.468 { 00:08:41.468 "name": "pt1", 00:08:41.468 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.468 "is_configured": true, 00:08:41.468 "data_offset": 2048, 00:08:41.468 
"data_size": 63488 00:08:41.468 }, 00:08:41.468 { 00:08:41.468 "name": "pt2", 00:08:41.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.468 "is_configured": true, 00:08:41.468 "data_offset": 2048, 00:08:41.468 "data_size": 63488 00:08:41.468 } 00:08:41.468 ] 00:08:41.468 }' 00:08:41.468 06:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:41.468 06:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.408 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:08:42.408 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:42.408 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:42.408 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:42.408 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:42.408 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:42.408 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:42.408 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:42.408 [2024-08-14 06:40:09.591236] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.408 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:42.408 "name": "raid_bdev1", 00:08:42.408 "aliases": [ 00:08:42.408 "5c4ec1c7-c9a2-4c91-a810-b1e0d5443434" 00:08:42.408 ], 00:08:42.408 "product_name": "Raid Volume", 00:08:42.408 "block_size": 512, 00:08:42.408 "num_blocks": 63488, 00:08:42.408 "uuid": "5c4ec1c7-c9a2-4c91-a810-b1e0d5443434", 00:08:42.408 "assigned_rate_limits": { 00:08:42.408 "rw_ios_per_sec": 0, 00:08:42.408 "rw_mbytes_per_sec": 0, 00:08:42.408 "r_mbytes_per_sec": 0, 00:08:42.408 "w_mbytes_per_sec": 0 00:08:42.408 }, 00:08:42.408 "claimed": false, 00:08:42.408 "zoned": false, 00:08:42.408 "supported_io_types": { 00:08:42.408 "read": true, 00:08:42.408 "write": true, 00:08:42.408 "unmap": false, 00:08:42.408 "flush": false, 00:08:42.408 "reset": true, 00:08:42.408 "nvme_admin": false, 00:08:42.408 "nvme_io": false, 00:08:42.408 "nvme_io_md": false, 00:08:42.408 "write_zeroes": true, 00:08:42.408 "zcopy": false, 00:08:42.408 "get_zone_info": false, 00:08:42.408 "zone_management": false, 00:08:42.408 "zone_append": false, 00:08:42.408 "compare": false, 00:08:42.408 "compare_and_write": false, 00:08:42.408 "abort": false, 00:08:42.408 "seek_hole": false, 00:08:42.408 "seek_data": false, 00:08:42.408 "copy": false, 00:08:42.408 "nvme_iov_md": false 00:08:42.408 }, 00:08:42.408 "memory_domains": [ 00:08:42.408 { 00:08:42.408 "dma_device_id": "system", 00:08:42.408 "dma_device_type": 1 00:08:42.408 }, 00:08:42.408 { 00:08:42.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.408 "dma_device_type": 2 00:08:42.408 }, 00:08:42.408 { 00:08:42.408 "dma_device_id": "system", 00:08:42.408 "dma_device_type": 1 00:08:42.408 }, 00:08:42.408 { 00:08:42.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.408 "dma_device_type": 2 00:08:42.408 } 00:08:42.408 ], 00:08:42.408 "driver_specific": { 00:08:42.408 "raid": { 00:08:42.408 "uuid": "5c4ec1c7-c9a2-4c91-a810-b1e0d5443434", 00:08:42.408 "strip_size_kb": 0, 00:08:42.408 "state": 
"online", 00:08:42.408 "raid_level": "raid1", 00:08:42.408 "superblock": true, 00:08:42.408 "num_base_bdevs": 2, 00:08:42.408 "num_base_bdevs_discovered": 2, 00:08:42.408 "num_base_bdevs_operational": 2, 00:08:42.408 "base_bdevs_list": [ 00:08:42.408 { 00:08:42.408 "name": "pt1", 00:08:42.408 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.408 "is_configured": true, 00:08:42.408 "data_offset": 2048, 00:08:42.408 "data_size": 63488 00:08:42.408 }, 00:08:42.408 { 00:08:42.408 "name": "pt2", 00:08:42.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.408 "is_configured": true, 00:08:42.408 "data_offset": 2048, 00:08:42.408 "data_size": 63488 00:08:42.408 } 00:08:42.408 ] 00:08:42.408 } 00:08:42.408 } 00:08:42.408 }' 00:08:42.408 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.408 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:42.408 pt2' 00:08:42.408 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:42.408 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:42.408 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:42.668 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:42.668 "name": "pt1", 00:08:42.668 "aliases": [ 00:08:42.668 "00000000-0000-0000-0000-000000000001" 00:08:42.668 ], 00:08:42.668 "product_name": "passthru", 00:08:42.668 "block_size": 512, 00:08:42.668 "num_blocks": 65536, 00:08:42.668 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.668 "assigned_rate_limits": { 00:08:42.668 "rw_ios_per_sec": 0, 00:08:42.668 "rw_mbytes_per_sec": 0, 00:08:42.668 "r_mbytes_per_sec": 0, 00:08:42.668 "w_mbytes_per_sec": 0 00:08:42.668 }, 00:08:42.668 "claimed": true, 00:08:42.668 "claim_type": "exclusive_write", 00:08:42.668 "zoned": false, 00:08:42.668 "supported_io_types": { 00:08:42.668 "read": true, 00:08:42.668 "write": true, 00:08:42.668 "unmap": true, 00:08:42.668 "flush": true, 00:08:42.668 "reset": true, 00:08:42.668 "nvme_admin": false, 00:08:42.668 "nvme_io": false, 00:08:42.668 "nvme_io_md": false, 00:08:42.668 "write_zeroes": true, 00:08:42.668 "zcopy": true, 00:08:42.668 "get_zone_info": false, 00:08:42.668 "zone_management": false, 00:08:42.668 "zone_append": false, 00:08:42.668 "compare": false, 00:08:42.668 "compare_and_write": false, 00:08:42.668 "abort": true, 00:08:42.668 "seek_hole": false, 00:08:42.668 "seek_data": false, 00:08:42.668 "copy": true, 00:08:42.668 "nvme_iov_md": false 00:08:42.668 }, 00:08:42.668 "memory_domains": [ 00:08:42.668 { 00:08:42.668 "dma_device_id": "system", 00:08:42.668 "dma_device_type": 1 00:08:42.668 }, 00:08:42.668 { 00:08:42.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.668 "dma_device_type": 2 00:08:42.668 } 00:08:42.668 ], 00:08:42.668 "driver_specific": { 00:08:42.668 "passthru": { 00:08:42.668 "name": "pt1", 00:08:42.668 "base_bdev_name": "malloc1" 00:08:42.668 } 00:08:42.668 } 00:08:42.668 }' 00:08:42.668 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:42.927 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:42.927 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:08:42.927 06:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:42.927 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:42.927 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:42.927 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:42.927 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:42.927 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:42.927 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:43.186 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:43.186 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:43.186 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:43.186 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:43.187 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:43.501 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:43.501 "name": "pt2", 00:08:43.501 "aliases": [ 00:08:43.501 "00000000-0000-0000-0000-000000000002" 00:08:43.501 ], 00:08:43.501 "product_name": "passthru", 00:08:43.501 "block_size": 512, 00:08:43.501 "num_blocks": 65536, 00:08:43.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.501 "assigned_rate_limits": { 00:08:43.501 "rw_ios_per_sec": 0, 00:08:43.501 "rw_mbytes_per_sec": 0, 00:08:43.501 "r_mbytes_per_sec": 0, 00:08:43.501 "w_mbytes_per_sec": 0 00:08:43.501 }, 00:08:43.501 "claimed": true, 00:08:43.501 "claim_type": "exclusive_write", 00:08:43.501 "zoned": false, 00:08:43.501 "supported_io_types": { 00:08:43.501 "read": true, 00:08:43.501 "write": true, 00:08:43.501 "unmap": true, 00:08:43.501 "flush": true, 00:08:43.501 "reset": true, 00:08:43.501 "nvme_admin": false, 00:08:43.501 "nvme_io": false, 00:08:43.501 "nvme_io_md": false, 00:08:43.501 "write_zeroes": true, 00:08:43.501 "zcopy": true, 00:08:43.501 "get_zone_info": false, 00:08:43.501 "zone_management": false, 00:08:43.501 "zone_append": false, 00:08:43.501 "compare": false, 00:08:43.501 "compare_and_write": false, 00:08:43.501 "abort": true, 00:08:43.501 "seek_hole": false, 00:08:43.501 "seek_data": false, 00:08:43.501 "copy": true, 00:08:43.501 "nvme_iov_md": false 00:08:43.501 }, 00:08:43.501 "memory_domains": [ 00:08:43.501 { 00:08:43.501 "dma_device_id": "system", 00:08:43.501 "dma_device_type": 1 00:08:43.501 }, 00:08:43.501 { 00:08:43.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.501 "dma_device_type": 2 00:08:43.501 } 00:08:43.501 ], 00:08:43.501 "driver_specific": { 00:08:43.501 "passthru": { 00:08:43.501 "name": "pt2", 00:08:43.501 "base_bdev_name": "malloc2" 00:08:43.501 } 00:08:43.501 } 00:08:43.501 }' 00:08:43.502 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:43.502 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:43.502 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:43.502 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:43.502 06:40:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:43.502 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:43.502 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:43.776 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:43.777 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:43.777 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:43.777 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:43.777 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:43.777 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:43.777 06:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:08:44.035 [2024-08-14 06:40:11.132897] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.035 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 5c4ec1c7-c9a2-4c91-a810-b1e0d5443434 '!=' 5c4ec1c7-c9a2-4c91-a810-b1e0d5443434 ']' 00:08:44.035 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:08:44.035 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:44.035 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:44.035 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:44.294 [2024-08-14 06:40:11.388248] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:44.294 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:44.294 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:44.294 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:44.294 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:44.294 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:44.294 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:44.294 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:44.294 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:44.294 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:44.294 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:44.294 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.294 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:44.553 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:44.553 "name": "raid_bdev1", 00:08:44.553 "uuid": "5c4ec1c7-c9a2-4c91-a810-b1e0d5443434", 00:08:44.553 "strip_size_kb": 0, 00:08:44.553 "state": "online", 
00:08:44.553 "raid_level": "raid1", 00:08:44.553 "superblock": true, 00:08:44.553 "num_base_bdevs": 2, 00:08:44.553 "num_base_bdevs_discovered": 1, 00:08:44.553 "num_base_bdevs_operational": 1, 00:08:44.553 "base_bdevs_list": [ 00:08:44.553 { 00:08:44.553 "name": null, 00:08:44.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.553 "is_configured": false, 00:08:44.553 "data_offset": 2048, 00:08:44.553 "data_size": 63488 00:08:44.553 }, 00:08:44.553 { 00:08:44.553 "name": "pt2", 00:08:44.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.553 "is_configured": true, 00:08:44.553 "data_offset": 2048, 00:08:44.553 "data_size": 63488 00:08:44.553 } 00:08:44.553 ] 00:08:44.553 }' 00:08:44.553 06:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:44.553 06:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.120 06:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:45.379 [2024-08-14 06:40:12.558233] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:45.379 [2024-08-14 06:40:12.558280] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.379 [2024-08-14 06:40:12.558375] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.379 [2024-08-14 06:40:12.558428] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.379 [2024-08-14 06:40:12.558449] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:45.379 06:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:08:45.379 06:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:45.638 06:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:08:45.638 06:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:08:45.638 06:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:08:45.638 06:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:08:45.638 06:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:45.897 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:08:45.897 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:08:45.897 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:08:45.897 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:08:45.897 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=1 00:08:45.897 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:46.156 [2024-08-14 06:40:13.317382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:46.156 [2024-08-14 06:40:13.317482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:08:46.156 [2024-08-14 06:40:13.317506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:46.156 [2024-08-14 06:40:13.317519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.156 [2024-08-14 06:40:13.319963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.156 [2024-08-14 06:40:13.320021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:46.156 [2024-08-14 06:40:13.320117] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:46.156 [2024-08-14 06:40:13.320162] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:46.156 [2024-08-14 06:40:13.320272] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:46.156 [2024-08-14 06:40:13.320286] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:46.156 [2024-08-14 06:40:13.320583] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:46.156 [2024-08-14 06:40:13.320740] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:46.156 [2024-08-14 06:40:13.320760] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:46.156 [2024-08-14 06:40:13.320886] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.156 pt2 00:08:46.156 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:46.156 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:46.156 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:46.157 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:46.157 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:46.157 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:46.157 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:46.157 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:46.157 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:46.157 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:46.157 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:46.157 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.416 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:46.416 "name": "raid_bdev1", 00:08:46.416 "uuid": "5c4ec1c7-c9a2-4c91-a810-b1e0d5443434", 00:08:46.416 "strip_size_kb": 0, 00:08:46.416 "state": "online", 00:08:46.416 "raid_level": "raid1", 00:08:46.416 "superblock": true, 00:08:46.416 "num_base_bdevs": 2, 00:08:46.416 "num_base_bdevs_discovered": 1, 00:08:46.416 "num_base_bdevs_operational": 1, 00:08:46.416 "base_bdevs_list": [ 00:08:46.416 { 00:08:46.416 "name": null, 00:08:46.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.416 
"is_configured": false, 00:08:46.416 "data_offset": 2048, 00:08:46.416 "data_size": 63488 00:08:46.416 }, 00:08:46.416 { 00:08:46.416 "name": "pt2", 00:08:46.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.416 "is_configured": true, 00:08:46.416 "data_offset": 2048, 00:08:46.416 "data_size": 63488 00:08:46.416 } 00:08:46.416 ] 00:08:46.416 }' 00:08:46.416 06:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:46.416 06:40:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.351 06:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:47.351 [2024-08-14 06:40:14.519547] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.351 [2024-08-14 06:40:14.519601] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.351 [2024-08-14 06:40:14.519687] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.351 [2024-08-14 06:40:14.519747] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.351 [2024-08-14 06:40:14.519758] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:47.351 06:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:47.351 06:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:08:47.609 06:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:08:47.609 06:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:08:47.609 06:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:08:47.609 06:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:47.869 [2024-08-14 06:40:15.050720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:47.869 [2024-08-14 06:40:15.050817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.869 [2024-08-14 06:40:15.050840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:08:47.869 [2024-08-14 06:40:15.050850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.869 [2024-08-14 06:40:15.053361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.869 [2024-08-14 06:40:15.053414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:47.869 [2024-08-14 06:40:15.053519] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:47.869 [2024-08-14 06:40:15.053568] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:47.869 [2024-08-14 06:40:15.053717] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:47.869 [2024-08-14 06:40:15.053744] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.869 [2024-08-14 06:40:15.053782] bdev_raid.c: 378:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:08:47.869 [2024-08-14 06:40:15.053834] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:47.869 [2024-08-14 06:40:15.053935] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:08:47.869 [2024-08-14 06:40:15.053950] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:47.869 [2024-08-14 06:40:15.054247] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:47.869 [2024-08-14 06:40:15.054391] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:08:47.869 [2024-08-14 06:40:15.054413] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:08:47.869 [2024-08-14 06:40:15.054590] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.869 pt1 00:08:47.869 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:08:47.869 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:47.869 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:47.869 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:47.869 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:47.869 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:47.869 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:47.869 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:47.869 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:47.869 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:47.869 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:47.869 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:47.869 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.128 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:48.128 "name": "raid_bdev1", 00:08:48.128 "uuid": "5c4ec1c7-c9a2-4c91-a810-b1e0d5443434", 00:08:48.128 "strip_size_kb": 0, 00:08:48.128 "state": "online", 00:08:48.128 "raid_level": "raid1", 00:08:48.128 "superblock": true, 00:08:48.128 "num_base_bdevs": 2, 00:08:48.128 "num_base_bdevs_discovered": 1, 00:08:48.128 "num_base_bdevs_operational": 1, 00:08:48.128 "base_bdevs_list": [ 00:08:48.128 { 00:08:48.128 "name": null, 00:08:48.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.128 "is_configured": false, 00:08:48.128 "data_offset": 2048, 00:08:48.128 "data_size": 63488 00:08:48.128 }, 00:08:48.128 { 00:08:48.128 "name": "pt2", 00:08:48.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.128 "is_configured": true, 00:08:48.128 "data_offset": 2048, 00:08:48.128 "data_size": 63488 00:08:48.128 } 00:08:48.128 ] 00:08:48.128 }' 00:08:48.128 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:08:48.128 06:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.065 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:08:49.065 06:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:49.065 06:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:08:49.065 06:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:49.065 06:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:08:49.324 [2024-08-14 06:40:16.424791] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.324 06:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 5c4ec1c7-c9a2-4c91-a810-b1e0d5443434 '!=' 5c4ec1c7-c9a2-4c91-a810-b1e0d5443434 ']' 00:08:49.324 06:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 74519 00:08:49.324 06:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 74519 ']' 00:08:49.324 06:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 74519 00:08:49.324 06:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:08:49.324 06:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:49.324 06:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74519 00:08:49.324 killing process with pid 74519 00:08:49.324 06:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:49.324 06:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:49.324 06:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74519' 00:08:49.324 06:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 74519 00:08:49.324 [2024-08-14 06:40:16.486417] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:49.324 [2024-08-14 06:40:16.486531] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.324 [2024-08-14 06:40:16.486586] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.324 [2024-08-14 06:40:16.486600] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:08:49.324 06:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 74519 00:08:49.324 [2024-08-14 06:40:16.510895] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.612 06:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:08:49.612 00:08:49.612 real 0m16.578s 00:08:49.612 user 0m30.491s 00:08:49.612 sys 0m2.470s 00:08:49.612 06:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:49.612 06:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.612 ************************************ 00:08:49.612 END TEST raid_superblock_test 00:08:49.612 ************************************ 00:08:49.612 06:40:16 
bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:49.612 06:40:16 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:49.612 06:40:16 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:49.612 06:40:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.612 ************************************ 00:08:49.612 START TEST raid_read_error_test 00:08:49.612 ************************************ 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 2 read 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.EJ05BQ3God 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=75037 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 75037 /var/tmp/spdk-raid.sock 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 75037 ']' 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:49.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:49.612 06:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.872 [2024-08-14 06:40:16.946765] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:08:49.872 [2024-08-14 06:40:16.947006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75037 ] 00:08:49.872 [2024-08-14 06:40:17.101135] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.130 [2024-08-14 06:40:17.156367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.130 [2024-08-14 06:40:17.202615] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.130 [2024-08-14 06:40:17.202667] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.698 06:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:50.698 06:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:08:50.698 06:40:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:08:50.698 06:40:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:50.956 BaseBdev1_malloc 00:08:50.956 06:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:51.215 true 00:08:51.215 06:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:51.474 [2024-08-14 06:40:18.525057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:51.474 [2024-08-14 06:40:18.525199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.474 [2024-08-14 06:40:18.525233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:51.474 [2024-08-14 06:40:18.525259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.474 [2024-08-14 06:40:18.527913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.474 [2024-08-14 06:40:18.527975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:51.474 BaseBdev1 00:08:51.474 06:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:08:51.474 06:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:51.733 BaseBdev2_malloc 00:08:51.733 06:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:51.992 true 00:08:51.992 06:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:52.250 [2024-08-14 06:40:19.253432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:52.250 [2024-08-14 06:40:19.253530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.250 [2024-08-14 06:40:19.253561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:52.250 [2024-08-14 06:40:19.253575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.250 [2024-08-14 06:40:19.256164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.250 [2024-08-14 06:40:19.256238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:52.250 BaseBdev2 00:08:52.250 06:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:52.250 [2024-08-14 06:40:19.489461] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.250 [2024-08-14 06:40:19.491739] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:52.250 [2024-08-14 06:40:19.492022] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:52.250 [2024-08-14 06:40:19.492053] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:52.250 [2024-08-14 06:40:19.492426] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:52.250 [2024-08-14 06:40:19.492640] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:52.250 [2024-08-14 06:40:19.492661] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:52.250 [2024-08-14 06:40:19.492860] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.508 06:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:52.508 06:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:52.508 06:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:52.508 06:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:52.508 06:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:52.508 06:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:52.508 06:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:52.508 06:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:52.508 06:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:52.508 06:40:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:52.508 06:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:52.508 06:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.508 06:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:52.508 "name": "raid_bdev1", 00:08:52.508 "uuid": "bd08f4f5-0373-4685-a28e-e7131e97810a", 00:08:52.508 "strip_size_kb": 0, 00:08:52.508 "state": "online", 00:08:52.508 "raid_level": "raid1", 00:08:52.508 "superblock": true, 00:08:52.508 "num_base_bdevs": 2, 00:08:52.508 "num_base_bdevs_discovered": 2, 00:08:52.508 "num_base_bdevs_operational": 2, 00:08:52.508 "base_bdevs_list": [ 00:08:52.508 { 00:08:52.508 "name": "BaseBdev1", 00:08:52.508 "uuid": "620a7739-a560-5ed5-a0bb-36183603993c", 00:08:52.508 "is_configured": true, 00:08:52.508 "data_offset": 2048, 00:08:52.508 "data_size": 63488 00:08:52.508 }, 00:08:52.508 { 00:08:52.508 "name": "BaseBdev2", 00:08:52.508 "uuid": "3e4066d1-3426-552f-bafb-c352415a65c6", 00:08:52.508 "is_configured": true, 00:08:52.508 "data_offset": 2048, 00:08:52.508 "data_size": 63488 00:08:52.508 } 00:08:52.508 ] 00:08:52.508 }' 00:08:52.508 06:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:52.508 06:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.443 06:40:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:08:53.443 06:40:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:53.443 [2024-08-14 06:40:20.436576] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:54.385 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.644 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:54.644 "name": "raid_bdev1", 00:08:54.644 "uuid": "bd08f4f5-0373-4685-a28e-e7131e97810a", 00:08:54.644 "strip_size_kb": 0, 00:08:54.644 "state": "online", 00:08:54.644 "raid_level": "raid1", 00:08:54.644 "superblock": true, 00:08:54.644 "num_base_bdevs": 2, 00:08:54.644 "num_base_bdevs_discovered": 2, 00:08:54.645 "num_base_bdevs_operational": 2, 00:08:54.645 "base_bdevs_list": [ 00:08:54.645 { 00:08:54.645 "name": "BaseBdev1", 00:08:54.645 "uuid": "620a7739-a560-5ed5-a0bb-36183603993c", 00:08:54.645 "is_configured": true, 00:08:54.645 "data_offset": 2048, 00:08:54.645 "data_size": 63488 00:08:54.645 }, 00:08:54.645 { 00:08:54.645 "name": "BaseBdev2", 00:08:54.645 "uuid": "3e4066d1-3426-552f-bafb-c352415a65c6", 00:08:54.645 "is_configured": true, 00:08:54.645 "data_offset": 2048, 00:08:54.645 "data_size": 63488 00:08:54.645 } 00:08:54.645 ] 00:08:54.645 }' 00:08:54.645 06:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:54.645 06:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.582 06:40:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:55.582 [2024-08-14 06:40:22.754846] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.582 [2024-08-14 06:40:22.754901] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.582 [2024-08-14 06:40:22.757838] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.582 [2024-08-14 06:40:22.757904] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.582 [2024-08-14 06:40:22.757998] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.582 [2024-08-14 06:40:22.758013] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:55.582 0 00:08:55.582 06:40:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 75037 00:08:55.582 06:40:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 75037 ']' 00:08:55.582 06:40:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 75037 00:08:55.582 06:40:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:08:55.582 06:40:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:55.582 06:40:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75037 00:08:55.582 06:40:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:55.582 06:40:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:55.582 killing process with pid 75037 00:08:55.582 06:40:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75037' 00:08:55.582 06:40:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 75037 00:08:55.582 [2024-08-14 06:40:22.821831] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.582 06:40:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 75037 00:08:55.841 [2024-08-14 06:40:22.838635] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:55.841 06:40:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.EJ05BQ3God 00:08:55.841 06:40:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:08:55.841 06:40:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:08:55.841 06:40:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:08:55.841 06:40:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:08:56.101 06:40:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:56.101 06:40:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:56.101 06:40:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:56.101 00:08:56.101 real 0m6.271s 00:08:56.101 user 0m9.946s 00:08:56.101 sys 0m0.876s 00:08:56.101 06:40:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:56.101 06:40:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.101 ************************************ 00:08:56.101 END TEST raid_read_error_test 00:08:56.101 ************************************ 00:08:56.101 06:40:23 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:56.101 06:40:23 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:56.101 06:40:23 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:56.101 06:40:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.101 ************************************ 00:08:56.101 START TEST raid_write_error_test 00:08:56.101 ************************************ 00:08:56.101 06:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 2 write 00:08:56.101 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:08:56.101 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:08:56.101 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:08:56.101 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:08:56.101 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:56.101 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:08:56.101 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:08:56.101 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:56.101 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:08:56.101 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:08:56.101 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs 
)) 00:08:56.101 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.5wK5AJxpv7 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=75207 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 75207 /var/tmp/spdk-raid.sock 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 75207 ']' 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:56.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:56.102 06:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.102 [2024-08-14 06:40:23.269843] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:08:56.102 [2024-08-14 06:40:23.269990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75207 ] 00:08:56.362 [2024-08-14 06:40:23.418086] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.362 [2024-08-14 06:40:23.473146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.362 [2024-08-14 06:40:23.519210] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.362 [2024-08-14 06:40:23.519265] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.300 06:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:57.300 06:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:08:57.300 06:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:08:57.300 06:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:57.300 BaseBdev1_malloc 00:08:57.300 06:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:57.560 true 00:08:57.560 06:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:57.819 [2024-08-14 06:40:24.922188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:57.819 [2024-08-14 06:40:24.922269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.819 [2024-08-14 06:40:24.922297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:57.819 [2024-08-14 06:40:24.922312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.819 [2024-08-14 06:40:24.924979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.819 [2024-08-14 06:40:24.925045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:57.819 BaseBdev1 00:08:57.819 06:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:08:57.819 06:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:58.078 BaseBdev2_malloc 00:08:58.078 06:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:58.337 true 00:08:58.337 06:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:58.596 [2024-08-14 06:40:25.685352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:58.596 [2024-08-14 06:40:25.685447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.596 [2024-08-14 06:40:25.685477] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:58.596 [2024-08-14 06:40:25.685490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.596 [2024-08-14 06:40:25.688055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.596 [2024-08-14 06:40:25.688111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:58.596 BaseBdev2 00:08:58.596 06:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:58.856 [2024-08-14 06:40:25.929453] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.856 [2024-08-14 06:40:25.931822] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.856 [2024-08-14 06:40:25.932081] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:58.856 [2024-08-14 06:40:25.932102] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:58.856 [2024-08-14 06:40:25.932488] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:58.856 [2024-08-14 06:40:25.932684] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:58.856 [2024-08-14 06:40:25.932757] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:58.856 [2024-08-14 06:40:25.932960] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.856 06:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:58.856 06:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:58.856 06:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:58.856 06:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:58.856 06:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:58.856 06:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:58.856 06:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:58.856 06:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:58.856 06:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:58.856 06:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:58.856 06:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:58.856 06:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.115 06:40:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:59.115 "name": "raid_bdev1", 00:08:59.115 "uuid": "a13ff984-2210-42ae-916a-7524c0a1d08d", 00:08:59.115 "strip_size_kb": 0, 00:08:59.115 "state": "online", 00:08:59.115 "raid_level": "raid1", 00:08:59.115 "superblock": true, 00:08:59.115 "num_base_bdevs": 2, 00:08:59.115 "num_base_bdevs_discovered": 
2, 00:08:59.115 "num_base_bdevs_operational": 2, 00:08:59.115 "base_bdevs_list": [ 00:08:59.115 { 00:08:59.115 "name": "BaseBdev1", 00:08:59.115 "uuid": "51236a0d-6087-5abf-aba8-80595da370d0", 00:08:59.115 "is_configured": true, 00:08:59.115 "data_offset": 2048, 00:08:59.115 "data_size": 63488 00:08:59.115 }, 00:08:59.115 { 00:08:59.115 "name": "BaseBdev2", 00:08:59.115 "uuid": "a906ac4e-1c11-5c07-9f4d-6d72bf8a894f", 00:08:59.115 "is_configured": true, 00:08:59.115 "data_offset": 2048, 00:08:59.115 "data_size": 63488 00:08:59.115 } 00:08:59.115 ] 00:08:59.115 }' 00:08:59.115 06:40:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:59.115 06:40:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.684 06:40:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:08:59.684 06:40:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:59.684 [2024-08-14 06:40:26.929015] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:00.621 06:40:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:00.880 [2024-08-14 06:40:28.056798] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:00.880 [2024-08-14 06:40:28.056995] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:00.880 [2024-08-14 06:40:28.057258] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002530 00:09:00.880 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:09:00.880 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:00.880 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:09:00.880 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=1 00:09:00.880 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:00.880 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:00.880 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:00.880 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:00.880 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:00.880 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:00.880 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:00.880 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:00.880 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:00.880 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:00.880 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:00.880 06:40:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.139 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:01.139 "name": "raid_bdev1", 00:09:01.139 "uuid": "a13ff984-2210-42ae-916a-7524c0a1d08d", 00:09:01.139 "strip_size_kb": 0, 00:09:01.139 "state": "online", 00:09:01.139 "raid_level": "raid1", 00:09:01.139 "superblock": true, 00:09:01.139 "num_base_bdevs": 2, 00:09:01.139 "num_base_bdevs_discovered": 1, 00:09:01.139 "num_base_bdevs_operational": 1, 00:09:01.139 "base_bdevs_list": [ 00:09:01.139 { 00:09:01.139 "name": null, 00:09:01.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.139 "is_configured": false, 00:09:01.139 "data_offset": 2048, 00:09:01.139 "data_size": 63488 00:09:01.139 }, 00:09:01.139 { 00:09:01.139 "name": "BaseBdev2", 00:09:01.139 "uuid": "a906ac4e-1c11-5c07-9f4d-6d72bf8a894f", 00:09:01.139 "is_configured": true, 00:09:01.139 "data_offset": 2048, 00:09:01.139 "data_size": 63488 00:09:01.139 } 00:09:01.139 ] 00:09:01.139 }' 00:09:01.139 06:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:01.139 06:40:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.078 06:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:02.078 [2024-08-14 06:40:29.309262] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:02.078 [2024-08-14 06:40:29.309404] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.078 [2024-08-14 06:40:29.312296] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.078 [2024-08-14 06:40:29.312354] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.078 [2024-08-14 06:40:29.312435] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.078 [2024-08-14 06:40:29.312454] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:02.078 0 00:09:02.353 06:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 75207 00:09:02.353 06:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 75207 ']' 00:09:02.353 06:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 75207 00:09:02.353 06:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:09:02.353 06:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:02.353 06:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75207 00:09:02.353 06:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:02.353 06:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:02.353 06:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75207' 00:09:02.353 killing process with pid 75207 00:09:02.353 06:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 75207 00:09:02.353 [2024-08-14 06:40:29.383392] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.353 06:40:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 75207 00:09:02.353 [2024-08-14 06:40:29.400071] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.619 06:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.5wK5AJxpv7 00:09:02.619 06:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:09:02.619 06:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:09:02.619 06:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:09:02.619 06:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:09:02.619 06:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:02.619 06:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:09:02.620 06:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:02.620 00:09:02.620 real 0m6.481s 00:09:02.620 user 0m10.304s 00:09:02.620 sys 0m0.925s 00:09:02.620 06:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:02.620 06:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.620 ************************************ 00:09:02.620 END TEST raid_write_error_test 00:09:02.620 ************************************ 00:09:02.620 06:40:29 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:09:02.620 06:40:29 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:09:02.620 06:40:29 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:02.620 06:40:29 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:02.620 06:40:29 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:02.620 06:40:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:02.620 ************************************ 00:09:02.620 START TEST raid_state_function_test 00:09:02.620 ************************************ 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 false 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=75376 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:02.620 Process raid pid: 75376 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 75376' 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 75376 /var/tmp/spdk-raid.sock 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 75376 ']' 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:02.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:02.620 06:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.620 [2024-08-14 06:40:29.844214] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
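[Note: raid_state_function_test below creates the raid bdev before any of its base bdevs exist. As a rough sketch, with the socket path and names this script uses, the check amounts to:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
Until the base bdevs are actually created (for example with bdev_malloc_create 32 512 -b BaseBdev1, as the script does later), Existed_Raid is expected to stay in the "configuring" state with num_base_bdevs_discovered equal to 0.]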
00:09:02.620 [2024-08-14 06:40:29.844349] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.879 [2024-08-14 06:40:29.994398] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.879 [2024-08-14 06:40:30.050727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.879 [2024-08-14 06:40:30.098495] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.879 [2024-08-14 06:40:30.098545] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.816 06:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:03.816 06:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:09:03.816 06:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:04.075 [2024-08-14 06:40:31.113097] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.075 [2024-08-14 06:40:31.113179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.075 [2024-08-14 06:40:31.113206] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.075 [2024-08-14 06:40:31.113218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.075 [2024-08-14 06:40:31.113231] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:04.075 [2024-08-14 06:40:31.113239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:04.075 06:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.075 06:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:04.075 06:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:04.075 06:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:04.075 06:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:04.075 06:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:04.075 06:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:04.075 06:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:04.075 06:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:04.075 06:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:04.075 06:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.075 06:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:04.334 06:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:09:04.334 "name": "Existed_Raid", 00:09:04.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.334 "strip_size_kb": 64, 00:09:04.334 "state": "configuring", 00:09:04.334 "raid_level": "raid0", 00:09:04.334 "superblock": false, 00:09:04.334 "num_base_bdevs": 3, 00:09:04.334 "num_base_bdevs_discovered": 0, 00:09:04.334 "num_base_bdevs_operational": 3, 00:09:04.334 "base_bdevs_list": [ 00:09:04.334 { 00:09:04.334 "name": "BaseBdev1", 00:09:04.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.334 "is_configured": false, 00:09:04.334 "data_offset": 0, 00:09:04.334 "data_size": 0 00:09:04.334 }, 00:09:04.334 { 00:09:04.334 "name": "BaseBdev2", 00:09:04.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.334 "is_configured": false, 00:09:04.334 "data_offset": 0, 00:09:04.334 "data_size": 0 00:09:04.334 }, 00:09:04.334 { 00:09:04.334 "name": "BaseBdev3", 00:09:04.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.334 "is_configured": false, 00:09:04.334 "data_offset": 0, 00:09:04.334 "data_size": 0 00:09:04.334 } 00:09:04.334 ] 00:09:04.334 }' 00:09:04.334 06:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:04.334 06:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.903 06:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:05.161 [2024-08-14 06:40:32.331416] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.161 [2024-08-14 06:40:32.331472] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:05.161 06:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:05.420 [2024-08-14 06:40:32.583408] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.420 [2024-08-14 06:40:32.583467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.420 [2024-08-14 06:40:32.583481] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.420 [2024-08-14 06:40:32.583490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.420 [2024-08-14 06:40:32.583499] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.421 [2024-08-14 06:40:32.583508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.421 06:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:05.679 [2024-08-14 06:40:32.864911] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.679 BaseBdev1 00:09:05.679 06:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:05.679 06:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:09:05.679 06:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:05.679 06:40:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@897 -- # local i 00:09:05.679 06:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:05.680 06:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:05.680 06:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:05.939 06:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:06.198 [ 00:09:06.198 { 00:09:06.198 "name": "BaseBdev1", 00:09:06.198 "aliases": [ 00:09:06.198 "686080aa-d7dc-4dc3-bafb-97c1c4487bbd" 00:09:06.198 ], 00:09:06.198 "product_name": "Malloc disk", 00:09:06.198 "block_size": 512, 00:09:06.198 "num_blocks": 65536, 00:09:06.198 "uuid": "686080aa-d7dc-4dc3-bafb-97c1c4487bbd", 00:09:06.198 "assigned_rate_limits": { 00:09:06.198 "rw_ios_per_sec": 0, 00:09:06.198 "rw_mbytes_per_sec": 0, 00:09:06.198 "r_mbytes_per_sec": 0, 00:09:06.198 "w_mbytes_per_sec": 0 00:09:06.198 }, 00:09:06.198 "claimed": true, 00:09:06.198 "claim_type": "exclusive_write", 00:09:06.198 "zoned": false, 00:09:06.198 "supported_io_types": { 00:09:06.198 "read": true, 00:09:06.198 "write": true, 00:09:06.198 "unmap": true, 00:09:06.198 "flush": true, 00:09:06.198 "reset": true, 00:09:06.198 "nvme_admin": false, 00:09:06.198 "nvme_io": false, 00:09:06.198 "nvme_io_md": false, 00:09:06.198 "write_zeroes": true, 00:09:06.198 "zcopy": true, 00:09:06.198 "get_zone_info": false, 00:09:06.198 "zone_management": false, 00:09:06.198 "zone_append": false, 00:09:06.198 "compare": false, 00:09:06.198 "compare_and_write": false, 00:09:06.198 "abort": true, 00:09:06.198 "seek_hole": false, 00:09:06.198 "seek_data": false, 00:09:06.198 "copy": true, 00:09:06.198 "nvme_iov_md": false 00:09:06.198 }, 00:09:06.198 "memory_domains": [ 00:09:06.198 { 00:09:06.198 "dma_device_id": "system", 00:09:06.198 "dma_device_type": 1 00:09:06.198 }, 00:09:06.198 { 00:09:06.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.198 "dma_device_type": 2 00:09:06.198 } 00:09:06.198 ], 00:09:06.198 "driver_specific": {} 00:09:06.198 } 00:09:06.198 ] 00:09:06.198 06:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:09:06.198 06:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:06.198 06:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:06.198 06:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:06.198 06:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:06.198 06:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:06.198 06:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:06.198 06:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:06.198 06:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:06.198 06:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:06.198 06:40:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:09:06.198 06:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:06.198 06:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.458 06:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:06.458 "name": "Existed_Raid", 00:09:06.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.458 "strip_size_kb": 64, 00:09:06.458 "state": "configuring", 00:09:06.458 "raid_level": "raid0", 00:09:06.458 "superblock": false, 00:09:06.458 "num_base_bdevs": 3, 00:09:06.458 "num_base_bdevs_discovered": 1, 00:09:06.458 "num_base_bdevs_operational": 3, 00:09:06.458 "base_bdevs_list": [ 00:09:06.458 { 00:09:06.458 "name": "BaseBdev1", 00:09:06.458 "uuid": "686080aa-d7dc-4dc3-bafb-97c1c4487bbd", 00:09:06.458 "is_configured": true, 00:09:06.458 "data_offset": 0, 00:09:06.458 "data_size": 65536 00:09:06.458 }, 00:09:06.458 { 00:09:06.458 "name": "BaseBdev2", 00:09:06.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.458 "is_configured": false, 00:09:06.458 "data_offset": 0, 00:09:06.458 "data_size": 0 00:09:06.458 }, 00:09:06.458 { 00:09:06.458 "name": "BaseBdev3", 00:09:06.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.458 "is_configured": false, 00:09:06.458 "data_offset": 0, 00:09:06.458 "data_size": 0 00:09:06.458 } 00:09:06.458 ] 00:09:06.458 }' 00:09:06.458 06:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:06.458 06:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.394 06:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:07.394 [2024-08-14 06:40:34.550520] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.394 [2024-08-14 06:40:34.550617] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:07.394 06:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:07.653 [2024-08-14 06:40:34.826207] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.653 [2024-08-14 06:40:34.828440] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:07.653 [2024-08-14 06:40:34.828504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:07.653 [2024-08-14 06:40:34.828518] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:07.653 [2024-08-14 06:40:34.828527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:07.653 06:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:07.653 06:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:07.653 06:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:07.653 06:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=Existed_Raid 00:09:07.653 06:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:07.653 06:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:07.653 06:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:07.653 06:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:07.653 06:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:07.653 06:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:07.653 06:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:07.653 06:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:07.653 06:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:07.653 06:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.912 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:07.912 "name": "Existed_Raid", 00:09:07.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.912 "strip_size_kb": 64, 00:09:07.912 "state": "configuring", 00:09:07.912 "raid_level": "raid0", 00:09:07.912 "superblock": false, 00:09:07.912 "num_base_bdevs": 3, 00:09:07.912 "num_base_bdevs_discovered": 1, 00:09:07.912 "num_base_bdevs_operational": 3, 00:09:07.912 "base_bdevs_list": [ 00:09:07.912 { 00:09:07.912 "name": "BaseBdev1", 00:09:07.912 "uuid": "686080aa-d7dc-4dc3-bafb-97c1c4487bbd", 00:09:07.912 "is_configured": true, 00:09:07.912 "data_offset": 0, 00:09:07.912 "data_size": 65536 00:09:07.912 }, 00:09:07.912 { 00:09:07.912 "name": "BaseBdev2", 00:09:07.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.912 "is_configured": false, 00:09:07.912 "data_offset": 0, 00:09:07.912 "data_size": 0 00:09:07.912 }, 00:09:07.912 { 00:09:07.912 "name": "BaseBdev3", 00:09:07.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.912 "is_configured": false, 00:09:07.912 "data_offset": 0, 00:09:07.912 "data_size": 0 00:09:07.912 } 00:09:07.912 ] 00:09:07.912 }' 00:09:07.912 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:07.912 06:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.850 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:08.850 [2024-08-14 06:40:36.021030] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.850 BaseBdev2 00:09:08.850 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:08.850 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:09:08.850 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:08.850 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:09:08.850 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
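The step traced around this point reduces to the following sketch: create the next 32 MiB, 512-byte-block malloc bdev as a base device, then read the array back over RPC and expect it to remain in the configuring state until all three base bdevs have been claimed. Socket path, bdev names, and jq filter are as used in this run; extracting .state with jq is an assumption about how verify_raid_bdev_state parses the info.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev2           # 32 MiB / 512-byte blocks -> 65536 blocks
    state=$($rpc bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [[ "$state" == "configuring" ]]                       # BaseBdev3 is still missing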
00:09:08.850 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:08.850 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:09.109 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:09.368 [ 00:09:09.368 { 00:09:09.368 "name": "BaseBdev2", 00:09:09.368 "aliases": [ 00:09:09.368 "c7833bd6-70a8-4576-a1a0-f0cc4aef54ad" 00:09:09.368 ], 00:09:09.368 "product_name": "Malloc disk", 00:09:09.368 "block_size": 512, 00:09:09.368 "num_blocks": 65536, 00:09:09.368 "uuid": "c7833bd6-70a8-4576-a1a0-f0cc4aef54ad", 00:09:09.368 "assigned_rate_limits": { 00:09:09.368 "rw_ios_per_sec": 0, 00:09:09.368 "rw_mbytes_per_sec": 0, 00:09:09.368 "r_mbytes_per_sec": 0, 00:09:09.368 "w_mbytes_per_sec": 0 00:09:09.368 }, 00:09:09.368 "claimed": true, 00:09:09.368 "claim_type": "exclusive_write", 00:09:09.368 "zoned": false, 00:09:09.368 "supported_io_types": { 00:09:09.368 "read": true, 00:09:09.368 "write": true, 00:09:09.368 "unmap": true, 00:09:09.368 "flush": true, 00:09:09.368 "reset": true, 00:09:09.368 "nvme_admin": false, 00:09:09.368 "nvme_io": false, 00:09:09.368 "nvme_io_md": false, 00:09:09.368 "write_zeroes": true, 00:09:09.368 "zcopy": true, 00:09:09.368 "get_zone_info": false, 00:09:09.368 "zone_management": false, 00:09:09.368 "zone_append": false, 00:09:09.368 "compare": false, 00:09:09.368 "compare_and_write": false, 00:09:09.368 "abort": true, 00:09:09.368 "seek_hole": false, 00:09:09.368 "seek_data": false, 00:09:09.368 "copy": true, 00:09:09.368 "nvme_iov_md": false 00:09:09.368 }, 00:09:09.368 "memory_domains": [ 00:09:09.368 { 00:09:09.368 "dma_device_id": "system", 00:09:09.368 "dma_device_type": 1 00:09:09.368 }, 00:09:09.368 { 00:09:09.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.368 "dma_device_type": 2 00:09:09.368 } 00:09:09.368 ], 00:09:09.368 "driver_specific": {} 00:09:09.368 } 00:09:09.368 ] 00:09:09.368 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:09:09.368 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:09.368 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:09.368 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.368 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:09.368 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:09.368 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:09.368 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:09.368 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:09.368 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:09.368 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:09.368 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:09.368 
06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:09.658 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:09.658 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.920 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:09.920 "name": "Existed_Raid", 00:09:09.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.920 "strip_size_kb": 64, 00:09:09.920 "state": "configuring", 00:09:09.920 "raid_level": "raid0", 00:09:09.920 "superblock": false, 00:09:09.920 "num_base_bdevs": 3, 00:09:09.920 "num_base_bdevs_discovered": 2, 00:09:09.920 "num_base_bdevs_operational": 3, 00:09:09.920 "base_bdevs_list": [ 00:09:09.920 { 00:09:09.920 "name": "BaseBdev1", 00:09:09.920 "uuid": "686080aa-d7dc-4dc3-bafb-97c1c4487bbd", 00:09:09.920 "is_configured": true, 00:09:09.920 "data_offset": 0, 00:09:09.920 "data_size": 65536 00:09:09.920 }, 00:09:09.920 { 00:09:09.920 "name": "BaseBdev2", 00:09:09.920 "uuid": "c7833bd6-70a8-4576-a1a0-f0cc4aef54ad", 00:09:09.920 "is_configured": true, 00:09:09.920 "data_offset": 0, 00:09:09.920 "data_size": 65536 00:09:09.920 }, 00:09:09.920 { 00:09:09.920 "name": "BaseBdev3", 00:09:09.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.920 "is_configured": false, 00:09:09.920 "data_offset": 0, 00:09:09.920 "data_size": 0 00:09:09.920 } 00:09:09.920 ] 00:09:09.920 }' 00:09:09.920 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:09.920 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.486 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:10.745 [2024-08-14 06:40:37.890096] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.745 [2024-08-14 06:40:37.890276] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:10.745 [2024-08-14 06:40:37.890316] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:10.745 [2024-08-14 06:40:37.890688] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:10.745 [2024-08-14 06:40:37.890847] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:10.745 [2024-08-14 06:40:37.890865] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:10.745 [2024-08-14 06:40:37.891110] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.745 BaseBdev3 00:09:10.745 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:09:10.745 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:09:10.745 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:10.745 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:09:10.745 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:10.745 06:40:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:10.745 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:11.004 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:11.263 [ 00:09:11.263 { 00:09:11.263 "name": "BaseBdev3", 00:09:11.263 "aliases": [ 00:09:11.263 "606e765e-1fd4-4384-b228-4e0530118514" 00:09:11.263 ], 00:09:11.263 "product_name": "Malloc disk", 00:09:11.263 "block_size": 512, 00:09:11.263 "num_blocks": 65536, 00:09:11.263 "uuid": "606e765e-1fd4-4384-b228-4e0530118514", 00:09:11.263 "assigned_rate_limits": { 00:09:11.263 "rw_ios_per_sec": 0, 00:09:11.263 "rw_mbytes_per_sec": 0, 00:09:11.263 "r_mbytes_per_sec": 0, 00:09:11.263 "w_mbytes_per_sec": 0 00:09:11.263 }, 00:09:11.263 "claimed": true, 00:09:11.263 "claim_type": "exclusive_write", 00:09:11.263 "zoned": false, 00:09:11.263 "supported_io_types": { 00:09:11.263 "read": true, 00:09:11.263 "write": true, 00:09:11.263 "unmap": true, 00:09:11.263 "flush": true, 00:09:11.263 "reset": true, 00:09:11.263 "nvme_admin": false, 00:09:11.263 "nvme_io": false, 00:09:11.263 "nvme_io_md": false, 00:09:11.263 "write_zeroes": true, 00:09:11.263 "zcopy": true, 00:09:11.263 "get_zone_info": false, 00:09:11.263 "zone_management": false, 00:09:11.263 "zone_append": false, 00:09:11.263 "compare": false, 00:09:11.263 "compare_and_write": false, 00:09:11.263 "abort": true, 00:09:11.263 "seek_hole": false, 00:09:11.263 "seek_data": false, 00:09:11.263 "copy": true, 00:09:11.263 "nvme_iov_md": false 00:09:11.263 }, 00:09:11.263 "memory_domains": [ 00:09:11.263 { 00:09:11.263 "dma_device_id": "system", 00:09:11.263 "dma_device_type": 1 00:09:11.263 }, 00:09:11.263 { 00:09:11.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.263 "dma_device_type": 2 00:09:11.263 } 00:09:11.263 ], 00:09:11.263 "driver_specific": {} 00:09:11.263 } 00:09:11.263 ] 00:09:11.263 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:09:11.263 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:11.263 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:11.263 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:11.263 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:11.263 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:11.263 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:11.263 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:11.263 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:11.263 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:11.263 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:11.263 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:11.263 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:09:11.263 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.263 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.522 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:11.522 "name": "Existed_Raid", 00:09:11.522 "uuid": "fd8c69ff-149b-4782-af5e-d24a38f8a5d8", 00:09:11.522 "strip_size_kb": 64, 00:09:11.522 "state": "online", 00:09:11.522 "raid_level": "raid0", 00:09:11.522 "superblock": false, 00:09:11.522 "num_base_bdevs": 3, 00:09:11.522 "num_base_bdevs_discovered": 3, 00:09:11.522 "num_base_bdevs_operational": 3, 00:09:11.522 "base_bdevs_list": [ 00:09:11.522 { 00:09:11.522 "name": "BaseBdev1", 00:09:11.522 "uuid": "686080aa-d7dc-4dc3-bafb-97c1c4487bbd", 00:09:11.522 "is_configured": true, 00:09:11.522 "data_offset": 0, 00:09:11.522 "data_size": 65536 00:09:11.522 }, 00:09:11.522 { 00:09:11.522 "name": "BaseBdev2", 00:09:11.522 "uuid": "c7833bd6-70a8-4576-a1a0-f0cc4aef54ad", 00:09:11.522 "is_configured": true, 00:09:11.522 "data_offset": 0, 00:09:11.522 "data_size": 65536 00:09:11.522 }, 00:09:11.522 { 00:09:11.522 "name": "BaseBdev3", 00:09:11.522 "uuid": "606e765e-1fd4-4384-b228-4e0530118514", 00:09:11.522 "is_configured": true, 00:09:11.522 "data_offset": 0, 00:09:11.522 "data_size": 65536 00:09:11.522 } 00:09:11.522 ] 00:09:11.522 }' 00:09:11.522 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:11.522 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.458 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:12.458 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:12.458 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:12.458 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:12.458 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:12.458 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:12.458 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:12.458 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:12.458 [2024-08-14 06:40:39.651941] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.458 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:12.458 "name": "Existed_Raid", 00:09:12.458 "aliases": [ 00:09:12.458 "fd8c69ff-149b-4782-af5e-d24a38f8a5d8" 00:09:12.458 ], 00:09:12.458 "product_name": "Raid Volume", 00:09:12.458 "block_size": 512, 00:09:12.458 "num_blocks": 196608, 00:09:12.458 "uuid": "fd8c69ff-149b-4782-af5e-d24a38f8a5d8", 00:09:12.458 "assigned_rate_limits": { 00:09:12.458 "rw_ios_per_sec": 0, 00:09:12.458 "rw_mbytes_per_sec": 0, 00:09:12.458 "r_mbytes_per_sec": 0, 00:09:12.458 "w_mbytes_per_sec": 0 00:09:12.458 }, 00:09:12.458 "claimed": false, 00:09:12.458 "zoned": false, 00:09:12.458 "supported_io_types": { 00:09:12.458 "read": true, 00:09:12.458 
"write": true, 00:09:12.458 "unmap": true, 00:09:12.458 "flush": true, 00:09:12.458 "reset": true, 00:09:12.458 "nvme_admin": false, 00:09:12.458 "nvme_io": false, 00:09:12.458 "nvme_io_md": false, 00:09:12.458 "write_zeroes": true, 00:09:12.458 "zcopy": false, 00:09:12.458 "get_zone_info": false, 00:09:12.458 "zone_management": false, 00:09:12.458 "zone_append": false, 00:09:12.458 "compare": false, 00:09:12.458 "compare_and_write": false, 00:09:12.458 "abort": false, 00:09:12.458 "seek_hole": false, 00:09:12.458 "seek_data": false, 00:09:12.458 "copy": false, 00:09:12.458 "nvme_iov_md": false 00:09:12.458 }, 00:09:12.458 "memory_domains": [ 00:09:12.458 { 00:09:12.458 "dma_device_id": "system", 00:09:12.458 "dma_device_type": 1 00:09:12.458 }, 00:09:12.458 { 00:09:12.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.458 "dma_device_type": 2 00:09:12.458 }, 00:09:12.458 { 00:09:12.458 "dma_device_id": "system", 00:09:12.458 "dma_device_type": 1 00:09:12.458 }, 00:09:12.458 { 00:09:12.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.458 "dma_device_type": 2 00:09:12.458 }, 00:09:12.458 { 00:09:12.458 "dma_device_id": "system", 00:09:12.458 "dma_device_type": 1 00:09:12.458 }, 00:09:12.458 { 00:09:12.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.458 "dma_device_type": 2 00:09:12.458 } 00:09:12.458 ], 00:09:12.458 "driver_specific": { 00:09:12.458 "raid": { 00:09:12.458 "uuid": "fd8c69ff-149b-4782-af5e-d24a38f8a5d8", 00:09:12.458 "strip_size_kb": 64, 00:09:12.458 "state": "online", 00:09:12.458 "raid_level": "raid0", 00:09:12.458 "superblock": false, 00:09:12.458 "num_base_bdevs": 3, 00:09:12.458 "num_base_bdevs_discovered": 3, 00:09:12.458 "num_base_bdevs_operational": 3, 00:09:12.458 "base_bdevs_list": [ 00:09:12.458 { 00:09:12.458 "name": "BaseBdev1", 00:09:12.458 "uuid": "686080aa-d7dc-4dc3-bafb-97c1c4487bbd", 00:09:12.458 "is_configured": true, 00:09:12.458 "data_offset": 0, 00:09:12.458 "data_size": 65536 00:09:12.458 }, 00:09:12.458 { 00:09:12.458 "name": "BaseBdev2", 00:09:12.458 "uuid": "c7833bd6-70a8-4576-a1a0-f0cc4aef54ad", 00:09:12.458 "is_configured": true, 00:09:12.458 "data_offset": 0, 00:09:12.458 "data_size": 65536 00:09:12.458 }, 00:09:12.458 { 00:09:12.458 "name": "BaseBdev3", 00:09:12.458 "uuid": "606e765e-1fd4-4384-b228-4e0530118514", 00:09:12.458 "is_configured": true, 00:09:12.458 "data_offset": 0, 00:09:12.458 "data_size": 65536 00:09:12.458 } 00:09:12.458 ] 00:09:12.458 } 00:09:12.458 } 00:09:12.458 }' 00:09:12.458 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.717 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:12.717 BaseBdev2 00:09:12.717 BaseBdev3' 00:09:12.717 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:12.717 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:12.717 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:12.976 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:12.976 "name": "BaseBdev1", 00:09:12.976 "aliases": [ 00:09:12.976 "686080aa-d7dc-4dc3-bafb-97c1c4487bbd" 00:09:12.976 ], 00:09:12.976 "product_name": "Malloc disk", 00:09:12.976 "block_size": 512, 00:09:12.976 
"num_blocks": 65536, 00:09:12.977 "uuid": "686080aa-d7dc-4dc3-bafb-97c1c4487bbd", 00:09:12.977 "assigned_rate_limits": { 00:09:12.977 "rw_ios_per_sec": 0, 00:09:12.977 "rw_mbytes_per_sec": 0, 00:09:12.977 "r_mbytes_per_sec": 0, 00:09:12.977 "w_mbytes_per_sec": 0 00:09:12.977 }, 00:09:12.977 "claimed": true, 00:09:12.977 "claim_type": "exclusive_write", 00:09:12.977 "zoned": false, 00:09:12.977 "supported_io_types": { 00:09:12.977 "read": true, 00:09:12.977 "write": true, 00:09:12.977 "unmap": true, 00:09:12.977 "flush": true, 00:09:12.977 "reset": true, 00:09:12.977 "nvme_admin": false, 00:09:12.977 "nvme_io": false, 00:09:12.977 "nvme_io_md": false, 00:09:12.977 "write_zeroes": true, 00:09:12.977 "zcopy": true, 00:09:12.977 "get_zone_info": false, 00:09:12.977 "zone_management": false, 00:09:12.977 "zone_append": false, 00:09:12.977 "compare": false, 00:09:12.977 "compare_and_write": false, 00:09:12.977 "abort": true, 00:09:12.977 "seek_hole": false, 00:09:12.977 "seek_data": false, 00:09:12.977 "copy": true, 00:09:12.977 "nvme_iov_md": false 00:09:12.977 }, 00:09:12.977 "memory_domains": [ 00:09:12.977 { 00:09:12.977 "dma_device_id": "system", 00:09:12.977 "dma_device_type": 1 00:09:12.977 }, 00:09:12.977 { 00:09:12.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.977 "dma_device_type": 2 00:09:12.977 } 00:09:12.977 ], 00:09:12.977 "driver_specific": {} 00:09:12.977 }' 00:09:12.977 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:12.977 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:12.977 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:12.977 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:12.977 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:12.977 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:12.977 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:13.235 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:13.235 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:13.235 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:13.235 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:13.236 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:13.236 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:13.236 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:13.236 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:13.494 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:13.494 "name": "BaseBdev2", 00:09:13.494 "aliases": [ 00:09:13.494 "c7833bd6-70a8-4576-a1a0-f0cc4aef54ad" 00:09:13.494 ], 00:09:13.494 "product_name": "Malloc disk", 00:09:13.494 "block_size": 512, 00:09:13.494 "num_blocks": 65536, 00:09:13.494 "uuid": "c7833bd6-70a8-4576-a1a0-f0cc4aef54ad", 00:09:13.494 "assigned_rate_limits": { 00:09:13.494 "rw_ios_per_sec": 0, 00:09:13.494 "rw_mbytes_per_sec": 0, 
00:09:13.494 "r_mbytes_per_sec": 0, 00:09:13.494 "w_mbytes_per_sec": 0 00:09:13.494 }, 00:09:13.494 "claimed": true, 00:09:13.494 "claim_type": "exclusive_write", 00:09:13.494 "zoned": false, 00:09:13.494 "supported_io_types": { 00:09:13.494 "read": true, 00:09:13.494 "write": true, 00:09:13.494 "unmap": true, 00:09:13.494 "flush": true, 00:09:13.494 "reset": true, 00:09:13.494 "nvme_admin": false, 00:09:13.494 "nvme_io": false, 00:09:13.494 "nvme_io_md": false, 00:09:13.494 "write_zeroes": true, 00:09:13.494 "zcopy": true, 00:09:13.494 "get_zone_info": false, 00:09:13.494 "zone_management": false, 00:09:13.494 "zone_append": false, 00:09:13.494 "compare": false, 00:09:13.494 "compare_and_write": false, 00:09:13.494 "abort": true, 00:09:13.494 "seek_hole": false, 00:09:13.494 "seek_data": false, 00:09:13.494 "copy": true, 00:09:13.494 "nvme_iov_md": false 00:09:13.494 }, 00:09:13.494 "memory_domains": [ 00:09:13.494 { 00:09:13.494 "dma_device_id": "system", 00:09:13.494 "dma_device_type": 1 00:09:13.494 }, 00:09:13.494 { 00:09:13.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.495 "dma_device_type": 2 00:09:13.495 } 00:09:13.495 ], 00:09:13.495 "driver_specific": {} 00:09:13.495 }' 00:09:13.495 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:13.495 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:13.753 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:13.753 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:13.753 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:13.753 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:13.753 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:13.753 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:13.753 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:13.753 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:14.012 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:14.012 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:14.012 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:14.012 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:14.012 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:14.272 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:14.272 "name": "BaseBdev3", 00:09:14.272 "aliases": [ 00:09:14.272 "606e765e-1fd4-4384-b228-4e0530118514" 00:09:14.272 ], 00:09:14.272 "product_name": "Malloc disk", 00:09:14.272 "block_size": 512, 00:09:14.272 "num_blocks": 65536, 00:09:14.272 "uuid": "606e765e-1fd4-4384-b228-4e0530118514", 00:09:14.272 "assigned_rate_limits": { 00:09:14.272 "rw_ios_per_sec": 0, 00:09:14.272 "rw_mbytes_per_sec": 0, 00:09:14.272 "r_mbytes_per_sec": 0, 00:09:14.272 "w_mbytes_per_sec": 0 00:09:14.272 }, 00:09:14.272 "claimed": true, 00:09:14.272 "claim_type": "exclusive_write", 00:09:14.272 "zoned": false, 
00:09:14.272 "supported_io_types": { 00:09:14.272 "read": true, 00:09:14.272 "write": true, 00:09:14.272 "unmap": true, 00:09:14.272 "flush": true, 00:09:14.272 "reset": true, 00:09:14.272 "nvme_admin": false, 00:09:14.272 "nvme_io": false, 00:09:14.272 "nvme_io_md": false, 00:09:14.272 "write_zeroes": true, 00:09:14.272 "zcopy": true, 00:09:14.272 "get_zone_info": false, 00:09:14.272 "zone_management": false, 00:09:14.272 "zone_append": false, 00:09:14.272 "compare": false, 00:09:14.272 "compare_and_write": false, 00:09:14.272 "abort": true, 00:09:14.272 "seek_hole": false, 00:09:14.272 "seek_data": false, 00:09:14.272 "copy": true, 00:09:14.272 "nvme_iov_md": false 00:09:14.272 }, 00:09:14.272 "memory_domains": [ 00:09:14.272 { 00:09:14.272 "dma_device_id": "system", 00:09:14.272 "dma_device_type": 1 00:09:14.272 }, 00:09:14.272 { 00:09:14.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.272 "dma_device_type": 2 00:09:14.272 } 00:09:14.272 ], 00:09:14.272 "driver_specific": {} 00:09:14.272 }' 00:09:14.272 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:14.272 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:14.272 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:14.272 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:14.272 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:14.531 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:14.531 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:14.531 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:14.531 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:14.531 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:14.531 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:14.531 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:14.531 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:14.789 [2024-08-14 06:40:42.003839] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:14.789 [2024-08-14 06:40:42.003898] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:14.789 [2024-08-14 06:40:42.003959] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:15.047 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.306 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:15.306 "name": "Existed_Raid", 00:09:15.306 "uuid": "fd8c69ff-149b-4782-af5e-d24a38f8a5d8", 00:09:15.306 "strip_size_kb": 64, 00:09:15.306 "state": "offline", 00:09:15.306 "raid_level": "raid0", 00:09:15.306 "superblock": false, 00:09:15.306 "num_base_bdevs": 3, 00:09:15.306 "num_base_bdevs_discovered": 2, 00:09:15.306 "num_base_bdevs_operational": 2, 00:09:15.306 "base_bdevs_list": [ 00:09:15.306 { 00:09:15.306 "name": null, 00:09:15.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.306 "is_configured": false, 00:09:15.306 "data_offset": 0, 00:09:15.306 "data_size": 65536 00:09:15.306 }, 00:09:15.306 { 00:09:15.306 "name": "BaseBdev2", 00:09:15.306 "uuid": "c7833bd6-70a8-4576-a1a0-f0cc4aef54ad", 00:09:15.306 "is_configured": true, 00:09:15.306 "data_offset": 0, 00:09:15.306 "data_size": 65536 00:09:15.306 }, 00:09:15.306 { 00:09:15.306 "name": "BaseBdev3", 00:09:15.306 "uuid": "606e765e-1fd4-4384-b228-4e0530118514", 00:09:15.306 "is_configured": true, 00:09:15.306 "data_offset": 0, 00:09:15.306 "data_size": 65536 00:09:15.306 } 00:09:15.306 ] 00:09:15.306 }' 00:09:15.306 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:15.306 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.873 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:15.874 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:15.874 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:15.874 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:16.133 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:16.133 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:16.133 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 
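Because has_redundancy returns 1 for raid0, the trace above expects the array to drop to offline as soon as any base bdev disappears. A hedged sketch of that check, using only the RPCs visible in this log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_delete BaseBdev1                     # remove one leg of the raid0 array
    state=$($rpc bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [[ "$state" == "offline" ]]                           # raid0 has no redundancy left to stay online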
00:09:16.392 [2024-08-14 06:40:43.509498] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:16.392 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:16.392 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:16.392 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:16.392 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:16.651 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:16.651 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:16.651 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:16.910 [2024-08-14 06:40:44.060825] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:16.910 [2024-08-14 06:40:44.061005] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:16.910 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:16.910 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:16.910 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:16.910 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:17.169 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:17.169 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:17.169 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:09:17.169 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:09:17.169 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:17.169 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:17.435 BaseBdev2 00:09:17.435 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:09:17.435 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:09:17.435 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:17.435 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:09:17.435 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:17.435 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:17.435 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:17.705 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:17.965 [ 00:09:17.965 { 00:09:17.965 "name": "BaseBdev2", 00:09:17.965 "aliases": [ 00:09:17.965 "a1aec7b7-be42-4f76-80fa-9ed0a6c750ac" 00:09:17.965 ], 00:09:17.965 "product_name": "Malloc disk", 00:09:17.965 "block_size": 512, 00:09:17.965 "num_blocks": 65536, 00:09:17.965 "uuid": "a1aec7b7-be42-4f76-80fa-9ed0a6c750ac", 00:09:17.965 "assigned_rate_limits": { 00:09:17.965 "rw_ios_per_sec": 0, 00:09:17.965 "rw_mbytes_per_sec": 0, 00:09:17.965 "r_mbytes_per_sec": 0, 00:09:17.965 "w_mbytes_per_sec": 0 00:09:17.965 }, 00:09:17.965 "claimed": false, 00:09:17.965 "zoned": false, 00:09:17.965 "supported_io_types": { 00:09:17.965 "read": true, 00:09:17.965 "write": true, 00:09:17.965 "unmap": true, 00:09:17.965 "flush": true, 00:09:17.965 "reset": true, 00:09:17.965 "nvme_admin": false, 00:09:17.965 "nvme_io": false, 00:09:17.965 "nvme_io_md": false, 00:09:17.965 "write_zeroes": true, 00:09:17.965 "zcopy": true, 00:09:17.965 "get_zone_info": false, 00:09:17.965 "zone_management": false, 00:09:17.965 "zone_append": false, 00:09:17.965 "compare": false, 00:09:17.965 "compare_and_write": false, 00:09:17.965 "abort": true, 00:09:17.965 "seek_hole": false, 00:09:17.965 "seek_data": false, 00:09:17.965 "copy": true, 00:09:17.965 "nvme_iov_md": false 00:09:17.965 }, 00:09:17.965 "memory_domains": [ 00:09:17.965 { 00:09:17.965 "dma_device_id": "system", 00:09:17.965 "dma_device_type": 1 00:09:17.965 }, 00:09:17.965 { 00:09:17.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.965 "dma_device_type": 2 00:09:17.965 } 00:09:17.965 ], 00:09:17.965 "driver_specific": {} 00:09:17.965 } 00:09:17.965 ] 00:09:17.965 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:09:17.965 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:17.965 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:17.965 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:18.224 BaseBdev3 00:09:18.224 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:09:18.224 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:09:18.224 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:18.224 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:09:18.224 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:18.224 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:18.224 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:18.484 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:18.742 [ 00:09:18.742 { 00:09:18.742 "name": "BaseBdev3", 00:09:18.742 "aliases": [ 00:09:18.742 "26e01973-ad29-4859-8e6e-b8f2646394c1" 00:09:18.742 ], 00:09:18.742 "product_name": "Malloc disk", 00:09:18.742 "block_size": 512, 00:09:18.742 "num_blocks": 65536, 00:09:18.742 
"uuid": "26e01973-ad29-4859-8e6e-b8f2646394c1", 00:09:18.742 "assigned_rate_limits": { 00:09:18.742 "rw_ios_per_sec": 0, 00:09:18.742 "rw_mbytes_per_sec": 0, 00:09:18.742 "r_mbytes_per_sec": 0, 00:09:18.742 "w_mbytes_per_sec": 0 00:09:18.742 }, 00:09:18.742 "claimed": false, 00:09:18.742 "zoned": false, 00:09:18.742 "supported_io_types": { 00:09:18.742 "read": true, 00:09:18.742 "write": true, 00:09:18.742 "unmap": true, 00:09:18.742 "flush": true, 00:09:18.742 "reset": true, 00:09:18.742 "nvme_admin": false, 00:09:18.742 "nvme_io": false, 00:09:18.742 "nvme_io_md": false, 00:09:18.742 "write_zeroes": true, 00:09:18.742 "zcopy": true, 00:09:18.742 "get_zone_info": false, 00:09:18.742 "zone_management": false, 00:09:18.742 "zone_append": false, 00:09:18.742 "compare": false, 00:09:18.742 "compare_and_write": false, 00:09:18.742 "abort": true, 00:09:18.742 "seek_hole": false, 00:09:18.742 "seek_data": false, 00:09:18.742 "copy": true, 00:09:18.742 "nvme_iov_md": false 00:09:18.742 }, 00:09:18.742 "memory_domains": [ 00:09:18.742 { 00:09:18.742 "dma_device_id": "system", 00:09:18.742 "dma_device_type": 1 00:09:18.742 }, 00:09:18.742 { 00:09:18.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.742 "dma_device_type": 2 00:09:18.742 } 00:09:18.742 ], 00:09:18.742 "driver_specific": {} 00:09:18.742 } 00:09:18.742 ] 00:09:18.742 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:09:18.742 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:18.742 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:18.742 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:19.001 [2024-08-14 06:40:46.233714] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.001 [2024-08-14 06:40:46.233899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.001 [2024-08-14 06:40:46.233957] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.001 [2024-08-14 06:40:46.236130] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.260 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:19.260 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:19.260 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:19.260 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:19.260 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:19.260 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:19.260 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:19.260 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:19.260 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:19.260 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:09:19.260 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:19.260 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.520 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:19.520 "name": "Existed_Raid", 00:09:19.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.520 "strip_size_kb": 64, 00:09:19.520 "state": "configuring", 00:09:19.520 "raid_level": "raid0", 00:09:19.520 "superblock": false, 00:09:19.520 "num_base_bdevs": 3, 00:09:19.520 "num_base_bdevs_discovered": 2, 00:09:19.520 "num_base_bdevs_operational": 3, 00:09:19.520 "base_bdevs_list": [ 00:09:19.520 { 00:09:19.520 "name": "BaseBdev1", 00:09:19.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.520 "is_configured": false, 00:09:19.520 "data_offset": 0, 00:09:19.520 "data_size": 0 00:09:19.520 }, 00:09:19.520 { 00:09:19.520 "name": "BaseBdev2", 00:09:19.520 "uuid": "a1aec7b7-be42-4f76-80fa-9ed0a6c750ac", 00:09:19.520 "is_configured": true, 00:09:19.520 "data_offset": 0, 00:09:19.520 "data_size": 65536 00:09:19.520 }, 00:09:19.520 { 00:09:19.520 "name": "BaseBdev3", 00:09:19.520 "uuid": "26e01973-ad29-4859-8e6e-b8f2646394c1", 00:09:19.520 "is_configured": true, 00:09:19.520 "data_offset": 0, 00:09:19.520 "data_size": 65536 00:09:19.520 } 00:09:19.520 ] 00:09:19.520 }' 00:09:19.520 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:19.520 06:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.088 06:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:09:20.348 [2024-08-14 06:40:47.419759] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:20.348 06:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:20.348 06:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:20.348 06:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:20.348 06:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:20.348 06:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:20.348 06:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:20.348 06:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:20.348 06:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:20.348 06:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:20.348 06:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:20.348 06:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.348 06:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.610 06:40:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:20.610 "name": "Existed_Raid", 00:09:20.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.610 "strip_size_kb": 64, 00:09:20.610 "state": "configuring", 00:09:20.610 "raid_level": "raid0", 00:09:20.610 "superblock": false, 00:09:20.610 "num_base_bdevs": 3, 00:09:20.610 "num_base_bdevs_discovered": 1, 00:09:20.610 "num_base_bdevs_operational": 3, 00:09:20.610 "base_bdevs_list": [ 00:09:20.610 { 00:09:20.610 "name": "BaseBdev1", 00:09:20.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.610 "is_configured": false, 00:09:20.610 "data_offset": 0, 00:09:20.610 "data_size": 0 00:09:20.610 }, 00:09:20.610 { 00:09:20.610 "name": null, 00:09:20.610 "uuid": "a1aec7b7-be42-4f76-80fa-9ed0a6c750ac", 00:09:20.610 "is_configured": false, 00:09:20.610 "data_offset": 0, 00:09:20.610 "data_size": 65536 00:09:20.610 }, 00:09:20.610 { 00:09:20.610 "name": "BaseBdev3", 00:09:20.610 "uuid": "26e01973-ad29-4859-8e6e-b8f2646394c1", 00:09:20.610 "is_configured": true, 00:09:20.610 "data_offset": 0, 00:09:20.610 "data_size": 65536 00:09:20.610 } 00:09:20.610 ] 00:09:20.610 }' 00:09:20.610 06:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:20.610 06:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.178 06:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.178 06:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:21.436 06:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:09:21.436 06:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:21.695 [2024-08-14 06:40:48.884871] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:21.695 BaseBdev1 00:09:21.695 06:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:09:21.695 06:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:09:21.695 06:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:21.695 06:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:09:21.695 06:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:21.695 06:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:21.695 06:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:21.954 06:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:22.214 [ 00:09:22.214 { 00:09:22.214 "name": "BaseBdev1", 00:09:22.214 "aliases": [ 00:09:22.214 "fac477ff-4f36-4418-97b7-09e761733777" 00:09:22.214 ], 00:09:22.214 "product_name": "Malloc disk", 00:09:22.214 "block_size": 512, 00:09:22.214 "num_blocks": 65536, 00:09:22.214 "uuid": "fac477ff-4f36-4418-97b7-09e761733777", 00:09:22.214 
"assigned_rate_limits": { 00:09:22.214 "rw_ios_per_sec": 0, 00:09:22.214 "rw_mbytes_per_sec": 0, 00:09:22.214 "r_mbytes_per_sec": 0, 00:09:22.214 "w_mbytes_per_sec": 0 00:09:22.214 }, 00:09:22.214 "claimed": true, 00:09:22.214 "claim_type": "exclusive_write", 00:09:22.214 "zoned": false, 00:09:22.214 "supported_io_types": { 00:09:22.214 "read": true, 00:09:22.214 "write": true, 00:09:22.214 "unmap": true, 00:09:22.214 "flush": true, 00:09:22.214 "reset": true, 00:09:22.214 "nvme_admin": false, 00:09:22.214 "nvme_io": false, 00:09:22.214 "nvme_io_md": false, 00:09:22.214 "write_zeroes": true, 00:09:22.214 "zcopy": true, 00:09:22.214 "get_zone_info": false, 00:09:22.214 "zone_management": false, 00:09:22.214 "zone_append": false, 00:09:22.214 "compare": false, 00:09:22.214 "compare_and_write": false, 00:09:22.214 "abort": true, 00:09:22.214 "seek_hole": false, 00:09:22.214 "seek_data": false, 00:09:22.214 "copy": true, 00:09:22.214 "nvme_iov_md": false 00:09:22.214 }, 00:09:22.214 "memory_domains": [ 00:09:22.214 { 00:09:22.214 "dma_device_id": "system", 00:09:22.214 "dma_device_type": 1 00:09:22.214 }, 00:09:22.214 { 00:09:22.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.214 "dma_device_type": 2 00:09:22.214 } 00:09:22.214 ], 00:09:22.214 "driver_specific": {} 00:09:22.214 } 00:09:22.214 ] 00:09:22.214 06:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:09:22.214 06:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:22.214 06:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:22.214 06:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:22.214 06:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:22.214 06:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:22.214 06:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:22.214 06:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:22.214 06:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:22.214 06:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:22.214 06:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:22.214 06:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:22.214 06:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.783 06:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:22.783 "name": "Existed_Raid", 00:09:22.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.783 "strip_size_kb": 64, 00:09:22.783 "state": "configuring", 00:09:22.783 "raid_level": "raid0", 00:09:22.783 "superblock": false, 00:09:22.783 "num_base_bdevs": 3, 00:09:22.783 "num_base_bdevs_discovered": 2, 00:09:22.783 "num_base_bdevs_operational": 3, 00:09:22.783 "base_bdevs_list": [ 00:09:22.783 { 00:09:22.783 "name": "BaseBdev1", 00:09:22.783 "uuid": "fac477ff-4f36-4418-97b7-09e761733777", 00:09:22.783 "is_configured": true, 
00:09:22.783 "data_offset": 0, 00:09:22.783 "data_size": 65536 00:09:22.783 }, 00:09:22.783 { 00:09:22.783 "name": null, 00:09:22.783 "uuid": "a1aec7b7-be42-4f76-80fa-9ed0a6c750ac", 00:09:22.783 "is_configured": false, 00:09:22.783 "data_offset": 0, 00:09:22.783 "data_size": 65536 00:09:22.783 }, 00:09:22.783 { 00:09:22.783 "name": "BaseBdev3", 00:09:22.783 "uuid": "26e01973-ad29-4859-8e6e-b8f2646394c1", 00:09:22.783 "is_configured": true, 00:09:22.783 "data_offset": 0, 00:09:22.783 "data_size": 65536 00:09:22.783 } 00:09:22.783 ] 00:09:22.783 }' 00:09:22.783 06:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:22.783 06:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.350 06:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:23.350 06:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:23.609 06:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:09:23.609 06:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:09:23.868 [2024-08-14 06:40:50.902920] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:23.868 06:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:23.868 06:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:23.868 06:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:23.868 06:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:23.868 06:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:23.868 06:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:23.868 06:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:23.868 06:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:23.868 06:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:23.868 06:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:23.868 06:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.868 06:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:24.127 06:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:24.127 "name": "Existed_Raid", 00:09:24.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.127 "strip_size_kb": 64, 00:09:24.127 "state": "configuring", 00:09:24.127 "raid_level": "raid0", 00:09:24.127 "superblock": false, 00:09:24.127 "num_base_bdevs": 3, 00:09:24.127 "num_base_bdevs_discovered": 1, 00:09:24.127 "num_base_bdevs_operational": 3, 00:09:24.127 "base_bdevs_list": [ 00:09:24.127 { 00:09:24.127 "name": "BaseBdev1", 00:09:24.127 "uuid": 
"fac477ff-4f36-4418-97b7-09e761733777", 00:09:24.127 "is_configured": true, 00:09:24.127 "data_offset": 0, 00:09:24.127 "data_size": 65536 00:09:24.127 }, 00:09:24.127 { 00:09:24.127 "name": null, 00:09:24.127 "uuid": "a1aec7b7-be42-4f76-80fa-9ed0a6c750ac", 00:09:24.127 "is_configured": false, 00:09:24.127 "data_offset": 0, 00:09:24.127 "data_size": 65536 00:09:24.127 }, 00:09:24.127 { 00:09:24.127 "name": null, 00:09:24.127 "uuid": "26e01973-ad29-4859-8e6e-b8f2646394c1", 00:09:24.127 "is_configured": false, 00:09:24.127 "data_offset": 0, 00:09:24.127 "data_size": 65536 00:09:24.127 } 00:09:24.127 ] 00:09:24.127 }' 00:09:24.127 06:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:24.127 06:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.695 06:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:24.695 06:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:24.954 06:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:09:24.954 06:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:25.213 [2024-08-14 06:40:52.337393] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.213 06:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:25.213 06:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:25.213 06:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:25.213 06:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:25.213 06:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:25.213 06:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:25.213 06:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:25.213 06:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:25.213 06:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:25.213 06:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:25.213 06:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:25.213 06:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.472 06:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:25.472 "name": "Existed_Raid", 00:09:25.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.472 "strip_size_kb": 64, 00:09:25.472 "state": "configuring", 00:09:25.472 "raid_level": "raid0", 00:09:25.472 "superblock": false, 00:09:25.472 "num_base_bdevs": 3, 00:09:25.472 "num_base_bdevs_discovered": 2, 00:09:25.472 "num_base_bdevs_operational": 3, 00:09:25.472 "base_bdevs_list": 
[ 00:09:25.472 { 00:09:25.472 "name": "BaseBdev1", 00:09:25.472 "uuid": "fac477ff-4f36-4418-97b7-09e761733777", 00:09:25.472 "is_configured": true, 00:09:25.472 "data_offset": 0, 00:09:25.472 "data_size": 65536 00:09:25.472 }, 00:09:25.472 { 00:09:25.472 "name": null, 00:09:25.472 "uuid": "a1aec7b7-be42-4f76-80fa-9ed0a6c750ac", 00:09:25.472 "is_configured": false, 00:09:25.472 "data_offset": 0, 00:09:25.472 "data_size": 65536 00:09:25.472 }, 00:09:25.472 { 00:09:25.472 "name": "BaseBdev3", 00:09:25.472 "uuid": "26e01973-ad29-4859-8e6e-b8f2646394c1", 00:09:25.472 "is_configured": true, 00:09:25.472 "data_offset": 0, 00:09:25.472 "data_size": 65536 00:09:25.472 } 00:09:25.472 ] 00:09:25.472 }' 00:09:25.472 06:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:25.473 06:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.437 06:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:26.437 06:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:26.437 06:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:09:26.437 06:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:26.695 [2024-08-14 06:40:53.828420] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:26.695 06:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:26.695 06:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:26.695 06:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:26.695 06:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:26.695 06:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:26.695 06:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:26.695 06:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:26.695 06:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:26.695 06:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:26.695 06:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:26.695 06:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:26.695 06:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.953 06:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:26.953 "name": "Existed_Raid", 00:09:26.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.953 "strip_size_kb": 64, 00:09:26.953 "state": "configuring", 00:09:26.953 "raid_level": "raid0", 00:09:26.953 "superblock": false, 00:09:26.953 "num_base_bdevs": 3, 00:09:26.953 "num_base_bdevs_discovered": 1, 00:09:26.953 
"num_base_bdevs_operational": 3, 00:09:26.953 "base_bdevs_list": [ 00:09:26.953 { 00:09:26.953 "name": null, 00:09:26.953 "uuid": "fac477ff-4f36-4418-97b7-09e761733777", 00:09:26.953 "is_configured": false, 00:09:26.953 "data_offset": 0, 00:09:26.953 "data_size": 65536 00:09:26.953 }, 00:09:26.953 { 00:09:26.953 "name": null, 00:09:26.953 "uuid": "a1aec7b7-be42-4f76-80fa-9ed0a6c750ac", 00:09:26.953 "is_configured": false, 00:09:26.953 "data_offset": 0, 00:09:26.953 "data_size": 65536 00:09:26.953 }, 00:09:26.953 { 00:09:26.953 "name": "BaseBdev3", 00:09:26.953 "uuid": "26e01973-ad29-4859-8e6e-b8f2646394c1", 00:09:26.953 "is_configured": true, 00:09:26.953 "data_offset": 0, 00:09:26.953 "data_size": 65536 00:09:26.953 } 00:09:26.953 ] 00:09:26.953 }' 00:09:26.953 06:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:26.953 06:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.521 06:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:27.521 06:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:27.779 06:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:09:27.779 06:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:28.038 [2024-08-14 06:40:55.198041] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.038 06:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:28.038 06:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:28.038 06:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:28.038 06:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:28.038 06:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:28.038 06:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:28.038 06:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:28.038 06:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:28.038 06:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:28.038 06:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:28.038 06:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:28.038 06:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.297 06:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:28.297 "name": "Existed_Raid", 00:09:28.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.297 "strip_size_kb": 64, 00:09:28.297 "state": "configuring", 00:09:28.297 "raid_level": "raid0", 00:09:28.297 "superblock": false, 00:09:28.297 
"num_base_bdevs": 3, 00:09:28.297 "num_base_bdevs_discovered": 2, 00:09:28.297 "num_base_bdevs_operational": 3, 00:09:28.297 "base_bdevs_list": [ 00:09:28.297 { 00:09:28.297 "name": null, 00:09:28.297 "uuid": "fac477ff-4f36-4418-97b7-09e761733777", 00:09:28.297 "is_configured": false, 00:09:28.297 "data_offset": 0, 00:09:28.297 "data_size": 65536 00:09:28.297 }, 00:09:28.297 { 00:09:28.297 "name": "BaseBdev2", 00:09:28.297 "uuid": "a1aec7b7-be42-4f76-80fa-9ed0a6c750ac", 00:09:28.297 "is_configured": true, 00:09:28.297 "data_offset": 0, 00:09:28.297 "data_size": 65536 00:09:28.297 }, 00:09:28.297 { 00:09:28.297 "name": "BaseBdev3", 00:09:28.297 "uuid": "26e01973-ad29-4859-8e6e-b8f2646394c1", 00:09:28.297 "is_configured": true, 00:09:28.297 "data_offset": 0, 00:09:28.297 "data_size": 65536 00:09:28.297 } 00:09:28.297 ] 00:09:28.297 }' 00:09:28.297 06:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:28.297 06:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.865 06:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:28.865 06:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:29.124 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:09:29.124 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:29.124 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:29.124 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u fac477ff-4f36-4418-97b7-09e761733777 00:09:29.382 [2024-08-14 06:40:56.535896] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:29.382 NewBaseBdev 00:09:29.382 [2024-08-14 06:40:56.536039] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:29.382 [2024-08-14 06:40:56.536052] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:29.383 [2024-08-14 06:40:56.536382] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:09:29.383 [2024-08-14 06:40:56.536528] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:29.383 [2024-08-14 06:40:56.536543] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:29.383 [2024-08-14 06:40:56.536741] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.383 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:09:29.383 06:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:09:29.383 06:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:29.383 06:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:09:29.383 06:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:29.383 06:40:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:29.383 06:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:29.642 06:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:29.901 [ 00:09:29.901 { 00:09:29.901 "name": "NewBaseBdev", 00:09:29.901 "aliases": [ 00:09:29.901 "fac477ff-4f36-4418-97b7-09e761733777" 00:09:29.901 ], 00:09:29.901 "product_name": "Malloc disk", 00:09:29.901 "block_size": 512, 00:09:29.901 "num_blocks": 65536, 00:09:29.901 "uuid": "fac477ff-4f36-4418-97b7-09e761733777", 00:09:29.901 "assigned_rate_limits": { 00:09:29.901 "rw_ios_per_sec": 0, 00:09:29.901 "rw_mbytes_per_sec": 0, 00:09:29.901 "r_mbytes_per_sec": 0, 00:09:29.901 "w_mbytes_per_sec": 0 00:09:29.901 }, 00:09:29.901 "claimed": true, 00:09:29.901 "claim_type": "exclusive_write", 00:09:29.901 "zoned": false, 00:09:29.901 "supported_io_types": { 00:09:29.901 "read": true, 00:09:29.901 "write": true, 00:09:29.901 "unmap": true, 00:09:29.901 "flush": true, 00:09:29.902 "reset": true, 00:09:29.902 "nvme_admin": false, 00:09:29.902 "nvme_io": false, 00:09:29.902 "nvme_io_md": false, 00:09:29.902 "write_zeroes": true, 00:09:29.902 "zcopy": true, 00:09:29.902 "get_zone_info": false, 00:09:29.902 "zone_management": false, 00:09:29.902 "zone_append": false, 00:09:29.902 "compare": false, 00:09:29.902 "compare_and_write": false, 00:09:29.902 "abort": true, 00:09:29.902 "seek_hole": false, 00:09:29.902 "seek_data": false, 00:09:29.902 "copy": true, 00:09:29.902 "nvme_iov_md": false 00:09:29.902 }, 00:09:29.902 "memory_domains": [ 00:09:29.902 { 00:09:29.902 "dma_device_id": "system", 00:09:29.902 "dma_device_type": 1 00:09:29.902 }, 00:09:29.902 { 00:09:29.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.902 "dma_device_type": 2 00:09:29.902 } 00:09:29.902 ], 00:09:29.902 "driver_specific": {} 00:09:29.902 } 00:09:29.902 ] 00:09:29.902 06:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:09:29.902 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:29.902 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:29.902 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:29.902 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:29.902 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:29.902 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:29.902 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:29.902 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:29.902 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:29.902 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:29.902 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:29.902 06:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.161 06:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:30.161 "name": "Existed_Raid", 00:09:30.161 "uuid": "67011539-7e01-47fa-814c-d0e061409b8d", 00:09:30.161 "strip_size_kb": 64, 00:09:30.161 "state": "online", 00:09:30.161 "raid_level": "raid0", 00:09:30.161 "superblock": false, 00:09:30.161 "num_base_bdevs": 3, 00:09:30.161 "num_base_bdevs_discovered": 3, 00:09:30.161 "num_base_bdevs_operational": 3, 00:09:30.161 "base_bdevs_list": [ 00:09:30.161 { 00:09:30.161 "name": "NewBaseBdev", 00:09:30.161 "uuid": "fac477ff-4f36-4418-97b7-09e761733777", 00:09:30.161 "is_configured": true, 00:09:30.161 "data_offset": 0, 00:09:30.161 "data_size": 65536 00:09:30.161 }, 00:09:30.161 { 00:09:30.161 "name": "BaseBdev2", 00:09:30.161 "uuid": "a1aec7b7-be42-4f76-80fa-9ed0a6c750ac", 00:09:30.161 "is_configured": true, 00:09:30.161 "data_offset": 0, 00:09:30.161 "data_size": 65536 00:09:30.161 }, 00:09:30.161 { 00:09:30.161 "name": "BaseBdev3", 00:09:30.161 "uuid": "26e01973-ad29-4859-8e6e-b8f2646394c1", 00:09:30.161 "is_configured": true, 00:09:30.161 "data_offset": 0, 00:09:30.161 "data_size": 65536 00:09:30.161 } 00:09:30.161 ] 00:09:30.161 }' 00:09:30.161 06:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:30.161 06:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.729 06:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:09:30.729 06:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:30.729 06:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:30.729 06:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:30.729 06:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:30.729 06:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:30.729 06:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:30.729 06:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:30.729 [2024-08-14 06:40:57.961895] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.988 06:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:30.988 "name": "Existed_Raid", 00:09:30.988 "aliases": [ 00:09:30.988 "67011539-7e01-47fa-814c-d0e061409b8d" 00:09:30.988 ], 00:09:30.988 "product_name": "Raid Volume", 00:09:30.988 "block_size": 512, 00:09:30.988 "num_blocks": 196608, 00:09:30.988 "uuid": "67011539-7e01-47fa-814c-d0e061409b8d", 00:09:30.988 "assigned_rate_limits": { 00:09:30.988 "rw_ios_per_sec": 0, 00:09:30.988 "rw_mbytes_per_sec": 0, 00:09:30.988 "r_mbytes_per_sec": 0, 00:09:30.988 "w_mbytes_per_sec": 0 00:09:30.988 }, 00:09:30.988 "claimed": false, 00:09:30.988 "zoned": false, 00:09:30.988 "supported_io_types": { 00:09:30.988 "read": true, 00:09:30.988 "write": true, 00:09:30.988 "unmap": true, 00:09:30.988 "flush": true, 00:09:30.988 "reset": true, 00:09:30.989 "nvme_admin": false, 00:09:30.989 
"nvme_io": false, 00:09:30.989 "nvme_io_md": false, 00:09:30.989 "write_zeroes": true, 00:09:30.989 "zcopy": false, 00:09:30.989 "get_zone_info": false, 00:09:30.989 "zone_management": false, 00:09:30.989 "zone_append": false, 00:09:30.989 "compare": false, 00:09:30.989 "compare_and_write": false, 00:09:30.989 "abort": false, 00:09:30.989 "seek_hole": false, 00:09:30.989 "seek_data": false, 00:09:30.989 "copy": false, 00:09:30.989 "nvme_iov_md": false 00:09:30.989 }, 00:09:30.989 "memory_domains": [ 00:09:30.989 { 00:09:30.989 "dma_device_id": "system", 00:09:30.989 "dma_device_type": 1 00:09:30.989 }, 00:09:30.989 { 00:09:30.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.989 "dma_device_type": 2 00:09:30.989 }, 00:09:30.989 { 00:09:30.989 "dma_device_id": "system", 00:09:30.989 "dma_device_type": 1 00:09:30.989 }, 00:09:30.989 { 00:09:30.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.989 "dma_device_type": 2 00:09:30.989 }, 00:09:30.989 { 00:09:30.989 "dma_device_id": "system", 00:09:30.989 "dma_device_type": 1 00:09:30.989 }, 00:09:30.989 { 00:09:30.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.989 "dma_device_type": 2 00:09:30.989 } 00:09:30.989 ], 00:09:30.989 "driver_specific": { 00:09:30.989 "raid": { 00:09:30.989 "uuid": "67011539-7e01-47fa-814c-d0e061409b8d", 00:09:30.989 "strip_size_kb": 64, 00:09:30.989 "state": "online", 00:09:30.989 "raid_level": "raid0", 00:09:30.989 "superblock": false, 00:09:30.989 "num_base_bdevs": 3, 00:09:30.989 "num_base_bdevs_discovered": 3, 00:09:30.989 "num_base_bdevs_operational": 3, 00:09:30.989 "base_bdevs_list": [ 00:09:30.989 { 00:09:30.989 "name": "NewBaseBdev", 00:09:30.989 "uuid": "fac477ff-4f36-4418-97b7-09e761733777", 00:09:30.989 "is_configured": true, 00:09:30.989 "data_offset": 0, 00:09:30.989 "data_size": 65536 00:09:30.989 }, 00:09:30.989 { 00:09:30.989 "name": "BaseBdev2", 00:09:30.989 "uuid": "a1aec7b7-be42-4f76-80fa-9ed0a6c750ac", 00:09:30.989 "is_configured": true, 00:09:30.989 "data_offset": 0, 00:09:30.989 "data_size": 65536 00:09:30.989 }, 00:09:30.989 { 00:09:30.989 "name": "BaseBdev3", 00:09:30.989 "uuid": "26e01973-ad29-4859-8e6e-b8f2646394c1", 00:09:30.989 "is_configured": true, 00:09:30.989 "data_offset": 0, 00:09:30.989 "data_size": 65536 00:09:30.989 } 00:09:30.989 ] 00:09:30.989 } 00:09:30.989 } 00:09:30.989 }' 00:09:30.989 06:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:30.989 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:09:30.989 BaseBdev2 00:09:30.989 BaseBdev3' 00:09:30.989 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:30.989 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:30.989 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:09:31.248 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:31.248 "name": "NewBaseBdev", 00:09:31.248 "aliases": [ 00:09:31.248 "fac477ff-4f36-4418-97b7-09e761733777" 00:09:31.248 ], 00:09:31.248 "product_name": "Malloc disk", 00:09:31.248 "block_size": 512, 00:09:31.248 "num_blocks": 65536, 00:09:31.248 "uuid": "fac477ff-4f36-4418-97b7-09e761733777", 00:09:31.248 "assigned_rate_limits": { 00:09:31.248 
"rw_ios_per_sec": 0, 00:09:31.248 "rw_mbytes_per_sec": 0, 00:09:31.248 "r_mbytes_per_sec": 0, 00:09:31.248 "w_mbytes_per_sec": 0 00:09:31.248 }, 00:09:31.248 "claimed": true, 00:09:31.248 "claim_type": "exclusive_write", 00:09:31.248 "zoned": false, 00:09:31.248 "supported_io_types": { 00:09:31.248 "read": true, 00:09:31.248 "write": true, 00:09:31.248 "unmap": true, 00:09:31.248 "flush": true, 00:09:31.248 "reset": true, 00:09:31.248 "nvme_admin": false, 00:09:31.248 "nvme_io": false, 00:09:31.248 "nvme_io_md": false, 00:09:31.248 "write_zeroes": true, 00:09:31.248 "zcopy": true, 00:09:31.248 "get_zone_info": false, 00:09:31.248 "zone_management": false, 00:09:31.248 "zone_append": false, 00:09:31.248 "compare": false, 00:09:31.248 "compare_and_write": false, 00:09:31.248 "abort": true, 00:09:31.248 "seek_hole": false, 00:09:31.248 "seek_data": false, 00:09:31.248 "copy": true, 00:09:31.248 "nvme_iov_md": false 00:09:31.248 }, 00:09:31.248 "memory_domains": [ 00:09:31.248 { 00:09:31.248 "dma_device_id": "system", 00:09:31.248 "dma_device_type": 1 00:09:31.248 }, 00:09:31.248 { 00:09:31.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.248 "dma_device_type": 2 00:09:31.248 } 00:09:31.248 ], 00:09:31.248 "driver_specific": {} 00:09:31.248 }' 00:09:31.248 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:31.248 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:31.248 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:31.248 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:31.248 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:31.248 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:31.248 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:31.249 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:31.249 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:31.507 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:31.507 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:31.507 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:31.507 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:31.507 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:31.507 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:31.765 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:31.765 "name": "BaseBdev2", 00:09:31.765 "aliases": [ 00:09:31.765 "a1aec7b7-be42-4f76-80fa-9ed0a6c750ac" 00:09:31.765 ], 00:09:31.765 "product_name": "Malloc disk", 00:09:31.765 "block_size": 512, 00:09:31.765 "num_blocks": 65536, 00:09:31.765 "uuid": "a1aec7b7-be42-4f76-80fa-9ed0a6c750ac", 00:09:31.765 "assigned_rate_limits": { 00:09:31.765 "rw_ios_per_sec": 0, 00:09:31.765 "rw_mbytes_per_sec": 0, 00:09:31.765 "r_mbytes_per_sec": 0, 00:09:31.765 "w_mbytes_per_sec": 0 00:09:31.765 }, 00:09:31.765 "claimed": true, 00:09:31.765 
"claim_type": "exclusive_write", 00:09:31.765 "zoned": false, 00:09:31.765 "supported_io_types": { 00:09:31.765 "read": true, 00:09:31.765 "write": true, 00:09:31.765 "unmap": true, 00:09:31.765 "flush": true, 00:09:31.765 "reset": true, 00:09:31.765 "nvme_admin": false, 00:09:31.765 "nvme_io": false, 00:09:31.765 "nvme_io_md": false, 00:09:31.765 "write_zeroes": true, 00:09:31.765 "zcopy": true, 00:09:31.765 "get_zone_info": false, 00:09:31.765 "zone_management": false, 00:09:31.765 "zone_append": false, 00:09:31.765 "compare": false, 00:09:31.765 "compare_and_write": false, 00:09:31.765 "abort": true, 00:09:31.765 "seek_hole": false, 00:09:31.765 "seek_data": false, 00:09:31.765 "copy": true, 00:09:31.765 "nvme_iov_md": false 00:09:31.765 }, 00:09:31.765 "memory_domains": [ 00:09:31.765 { 00:09:31.765 "dma_device_id": "system", 00:09:31.765 "dma_device_type": 1 00:09:31.765 }, 00:09:31.765 { 00:09:31.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.765 "dma_device_type": 2 00:09:31.765 } 00:09:31.765 ], 00:09:31.765 "driver_specific": {} 00:09:31.766 }' 00:09:31.766 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:31.766 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:31.766 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:31.766 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:31.766 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:31.766 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:31.766 06:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:32.024 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:32.024 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:32.024 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:32.024 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:32.024 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:32.024 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:32.024 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:32.024 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:32.284 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:32.284 "name": "BaseBdev3", 00:09:32.284 "aliases": [ 00:09:32.284 "26e01973-ad29-4859-8e6e-b8f2646394c1" 00:09:32.284 ], 00:09:32.284 "product_name": "Malloc disk", 00:09:32.284 "block_size": 512, 00:09:32.284 "num_blocks": 65536, 00:09:32.284 "uuid": "26e01973-ad29-4859-8e6e-b8f2646394c1", 00:09:32.284 "assigned_rate_limits": { 00:09:32.284 "rw_ios_per_sec": 0, 00:09:32.284 "rw_mbytes_per_sec": 0, 00:09:32.284 "r_mbytes_per_sec": 0, 00:09:32.284 "w_mbytes_per_sec": 0 00:09:32.284 }, 00:09:32.284 "claimed": true, 00:09:32.284 "claim_type": "exclusive_write", 00:09:32.284 "zoned": false, 00:09:32.284 "supported_io_types": { 00:09:32.284 "read": true, 00:09:32.284 "write": true, 00:09:32.284 "unmap": true, 00:09:32.284 
"flush": true, 00:09:32.284 "reset": true, 00:09:32.284 "nvme_admin": false, 00:09:32.284 "nvme_io": false, 00:09:32.284 "nvme_io_md": false, 00:09:32.284 "write_zeroes": true, 00:09:32.284 "zcopy": true, 00:09:32.284 "get_zone_info": false, 00:09:32.284 "zone_management": false, 00:09:32.284 "zone_append": false, 00:09:32.284 "compare": false, 00:09:32.284 "compare_and_write": false, 00:09:32.284 "abort": true, 00:09:32.284 "seek_hole": false, 00:09:32.284 "seek_data": false, 00:09:32.284 "copy": true, 00:09:32.284 "nvme_iov_md": false 00:09:32.284 }, 00:09:32.284 "memory_domains": [ 00:09:32.284 { 00:09:32.284 "dma_device_id": "system", 00:09:32.284 "dma_device_type": 1 00:09:32.284 }, 00:09:32.284 { 00:09:32.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.284 "dma_device_type": 2 00:09:32.284 } 00:09:32.284 ], 00:09:32.284 "driver_specific": {} 00:09:32.284 }' 00:09:32.284 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:32.284 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:32.284 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:32.284 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:32.284 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:32.548 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:32.548 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:32.548 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:32.548 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:32.548 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:32.548 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:32.548 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:32.548 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:32.807 [2024-08-14 06:40:59.940608] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.807 [2024-08-14 06:40:59.940743] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.807 [2024-08-14 06:40:59.940852] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.807 [2024-08-14 06:40:59.940918] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.807 [2024-08-14 06:40:59.940930] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:32.807 06:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 75376 00:09:32.807 06:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 75376 ']' 00:09:32.807 06:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 75376 00:09:32.807 06:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:09:32.807 06:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:32.807 06:40:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75376 00:09:32.807 06:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:32.807 06:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:32.807 06:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75376' 00:09:32.807 killing process with pid 75376 00:09:32.807 06:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 75376 00:09:32.807 06:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 75376 00:09:32.808 [2024-08-14 06:41:00.016432] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:32.808 [2024-08-14 06:41:00.048940] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.067 06:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:33.067 00:09:33.067 real 0m30.564s 00:09:33.067 user 0m56.701s 00:09:33.067 sys 0m4.632s 00:09:33.067 06:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:33.067 06:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.067 ************************************ 00:09:33.067 END TEST raid_state_function_test 00:09:33.067 ************************************ 00:09:33.326 06:41:00 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:33.326 06:41:00 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:33.326 06:41:00 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:33.326 06:41:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.326 ************************************ 00:09:33.326 START TEST raid_state_function_test_sb 00:09:33.326 ************************************ 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 true 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 
-- # echo BaseBdev3 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:33.326 Process raid pid: 76336 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=76336 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 76336' 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 76336 /var/tmp/spdk-raid.sock 00:09:33.326 06:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 76336 ']' 00:09:33.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:33.327 06:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:33.327 06:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:33.327 06:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:33.327 06:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:33.327 06:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.327 [2024-08-14 06:41:00.466648] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:09:33.327 [2024-08-14 06:41:00.466800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.586 [2024-08-14 06:41:00.615277] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.586 [2024-08-14 06:41:00.668425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.586 [2024-08-14 06:41:00.712204] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.586 [2024-08-14 06:41:00.712255] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.155 06:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:34.155 06:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:09:34.155 06:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:34.415 [2024-08-14 06:41:01.517031] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.415 [2024-08-14 06:41:01.517209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.415 [2024-08-14 06:41:01.517234] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.415 [2024-08-14 06:41:01.517261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.415 [2024-08-14 06:41:01.517275] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:34.415 [2024-08-14 06:41:01.517283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.415 06:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:34.415 06:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:34.415 06:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:34.415 06:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:34.415 06:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:34.415 06:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:34.415 06:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:34.415 06:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:34.415 06:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:34.415 06:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:34.415 06:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.415 06:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.675 06:41:01 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:34.675 "name": "Existed_Raid", 00:09:34.675 "uuid": "de78d835-17d4-4f05-aee4-e712a7eba5b1", 00:09:34.675 "strip_size_kb": 64, 00:09:34.675 "state": "configuring", 00:09:34.675 "raid_level": "raid0", 00:09:34.675 "superblock": true, 00:09:34.675 "num_base_bdevs": 3, 00:09:34.675 "num_base_bdevs_discovered": 0, 00:09:34.675 "num_base_bdevs_operational": 3, 00:09:34.675 "base_bdevs_list": [ 00:09:34.675 { 00:09:34.675 "name": "BaseBdev1", 00:09:34.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.675 "is_configured": false, 00:09:34.675 "data_offset": 0, 00:09:34.675 "data_size": 0 00:09:34.675 }, 00:09:34.675 { 00:09:34.675 "name": "BaseBdev2", 00:09:34.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.675 "is_configured": false, 00:09:34.675 "data_offset": 0, 00:09:34.675 "data_size": 0 00:09:34.675 }, 00:09:34.675 { 00:09:34.675 "name": "BaseBdev3", 00:09:34.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.675 "is_configured": false, 00:09:34.675 "data_offset": 0, 00:09:34.675 "data_size": 0 00:09:34.675 } 00:09:34.675 ] 00:09:34.675 }' 00:09:34.675 06:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:34.675 06:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.245 06:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:35.504 [2024-08-14 06:41:02.503343] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.504 [2024-08-14 06:41:02.503466] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:35.504 06:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:35.504 [2024-08-14 06:41:02.710995] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.504 [2024-08-14 06:41:02.711153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.504 [2024-08-14 06:41:02.711218] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.504 [2024-08-14 06:41:02.711247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.504 [2024-08-14 06:41:02.711268] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:35.504 [2024-08-14 06:41:02.711307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:35.504 06:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:35.763 [2024-08-14 06:41:02.915630] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.763 BaseBdev1 00:09:35.763 06:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:35.763 06:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:09:35.763 06:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 
00:09:35.763 06:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:35.764 06:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:35.764 06:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:35.764 06:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:36.024 06:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:36.283 [ 00:09:36.283 { 00:09:36.283 "name": "BaseBdev1", 00:09:36.283 "aliases": [ 00:09:36.283 "4fc11bfd-2a75-4c94-952f-aca885e9512b" 00:09:36.283 ], 00:09:36.283 "product_name": "Malloc disk", 00:09:36.283 "block_size": 512, 00:09:36.283 "num_blocks": 65536, 00:09:36.283 "uuid": "4fc11bfd-2a75-4c94-952f-aca885e9512b", 00:09:36.283 "assigned_rate_limits": { 00:09:36.283 "rw_ios_per_sec": 0, 00:09:36.283 "rw_mbytes_per_sec": 0, 00:09:36.283 "r_mbytes_per_sec": 0, 00:09:36.283 "w_mbytes_per_sec": 0 00:09:36.283 }, 00:09:36.283 "claimed": true, 00:09:36.283 "claim_type": "exclusive_write", 00:09:36.283 "zoned": false, 00:09:36.283 "supported_io_types": { 00:09:36.283 "read": true, 00:09:36.283 "write": true, 00:09:36.283 "unmap": true, 00:09:36.283 "flush": true, 00:09:36.283 "reset": true, 00:09:36.283 "nvme_admin": false, 00:09:36.283 "nvme_io": false, 00:09:36.283 "nvme_io_md": false, 00:09:36.283 "write_zeroes": true, 00:09:36.283 "zcopy": true, 00:09:36.283 "get_zone_info": false, 00:09:36.283 "zone_management": false, 00:09:36.283 "zone_append": false, 00:09:36.283 "compare": false, 00:09:36.283 "compare_and_write": false, 00:09:36.283 "abort": true, 00:09:36.283 "seek_hole": false, 00:09:36.283 "seek_data": false, 00:09:36.283 "copy": true, 00:09:36.283 "nvme_iov_md": false 00:09:36.283 }, 00:09:36.283 "memory_domains": [ 00:09:36.283 { 00:09:36.283 "dma_device_id": "system", 00:09:36.283 "dma_device_type": 1 00:09:36.283 }, 00:09:36.283 { 00:09:36.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.283 "dma_device_type": 2 00:09:36.283 } 00:09:36.283 ], 00:09:36.283 "driver_specific": {} 00:09:36.283 } 00:09:36.283 ] 00:09:36.283 06:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:36.283 06:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:36.283 06:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:36.283 06:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:36.283 06:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:36.283 06:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:36.283 06:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:36.283 06:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:36.283 06:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:36.283 06:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:09:36.283 06:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:36.283 06:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.283 06:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:36.542 06:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:36.542 "name": "Existed_Raid", 00:09:36.542 "uuid": "4c1985ef-040f-49cc-9a3e-af7660bbb020", 00:09:36.542 "strip_size_kb": 64, 00:09:36.542 "state": "configuring", 00:09:36.542 "raid_level": "raid0", 00:09:36.542 "superblock": true, 00:09:36.542 "num_base_bdevs": 3, 00:09:36.542 "num_base_bdevs_discovered": 1, 00:09:36.542 "num_base_bdevs_operational": 3, 00:09:36.542 "base_bdevs_list": [ 00:09:36.542 { 00:09:36.542 "name": "BaseBdev1", 00:09:36.542 "uuid": "4fc11bfd-2a75-4c94-952f-aca885e9512b", 00:09:36.542 "is_configured": true, 00:09:36.542 "data_offset": 2048, 00:09:36.542 "data_size": 63488 00:09:36.542 }, 00:09:36.542 { 00:09:36.542 "name": "BaseBdev2", 00:09:36.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.542 "is_configured": false, 00:09:36.542 "data_offset": 0, 00:09:36.542 "data_size": 0 00:09:36.542 }, 00:09:36.542 { 00:09:36.542 "name": "BaseBdev3", 00:09:36.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.542 "is_configured": false, 00:09:36.542 "data_offset": 0, 00:09:36.542 "data_size": 0 00:09:36.542 } 00:09:36.542 ] 00:09:36.542 }' 00:09:36.542 06:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:36.542 06:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.109 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:37.109 [2024-08-14 06:41:04.293326] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.109 [2024-08-14 06:41:04.293477] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:37.109 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:37.367 [2024-08-14 06:41:04.497061] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.367 [2024-08-14 06:41:04.499025] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.368 [2024-08-14 06:41:04.499122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.368 [2024-08-14 06:41:04.499184] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:37.368 [2024-08-14 06:41:04.499231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:37.368 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:37.368 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:37.368 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:37.368 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:37.368 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:37.368 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:37.368 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:37.368 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:37.368 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:37.368 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:37.368 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:37.368 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:37.368 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:37.368 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.627 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:37.627 "name": "Existed_Raid", 00:09:37.627 "uuid": "7cdd89ae-f3e1-45c8-b366-a9ee693a7f57", 00:09:37.627 "strip_size_kb": 64, 00:09:37.627 "state": "configuring", 00:09:37.627 "raid_level": "raid0", 00:09:37.627 "superblock": true, 00:09:37.627 "num_base_bdevs": 3, 00:09:37.627 "num_base_bdevs_discovered": 1, 00:09:37.627 "num_base_bdevs_operational": 3, 00:09:37.627 "base_bdevs_list": [ 00:09:37.627 { 00:09:37.627 "name": "BaseBdev1", 00:09:37.627 "uuid": "4fc11bfd-2a75-4c94-952f-aca885e9512b", 00:09:37.627 "is_configured": true, 00:09:37.627 "data_offset": 2048, 00:09:37.627 "data_size": 63488 00:09:37.627 }, 00:09:37.627 { 00:09:37.627 "name": "BaseBdev2", 00:09:37.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.627 "is_configured": false, 00:09:37.627 "data_offset": 0, 00:09:37.627 "data_size": 0 00:09:37.627 }, 00:09:37.627 { 00:09:37.627 "name": "BaseBdev3", 00:09:37.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.627 "is_configured": false, 00:09:37.627 "data_offset": 0, 00:09:37.627 "data_size": 0 00:09:37.627 } 00:09:37.627 ] 00:09:37.627 }' 00:09:37.627 06:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:37.627 06:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.251 06:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:38.509 [2024-08-14 06:41:05.541649] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.509 BaseBdev2 00:09:38.509 06:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:38.509 06:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:09:38.509 06:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:38.509 
06:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:38.509 06:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:38.509 06:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:38.509 06:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:38.769 06:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:38.769 [ 00:09:38.769 { 00:09:38.769 "name": "BaseBdev2", 00:09:38.769 "aliases": [ 00:09:38.769 "7e729aae-7030-4cfc-b3d5-b5c82d943923" 00:09:38.769 ], 00:09:38.769 "product_name": "Malloc disk", 00:09:38.769 "block_size": 512, 00:09:38.769 "num_blocks": 65536, 00:09:38.769 "uuid": "7e729aae-7030-4cfc-b3d5-b5c82d943923", 00:09:38.769 "assigned_rate_limits": { 00:09:38.769 "rw_ios_per_sec": 0, 00:09:38.769 "rw_mbytes_per_sec": 0, 00:09:38.769 "r_mbytes_per_sec": 0, 00:09:38.769 "w_mbytes_per_sec": 0 00:09:38.769 }, 00:09:38.769 "claimed": true, 00:09:38.769 "claim_type": "exclusive_write", 00:09:38.769 "zoned": false, 00:09:38.769 "supported_io_types": { 00:09:38.769 "read": true, 00:09:38.769 "write": true, 00:09:38.769 "unmap": true, 00:09:38.769 "flush": true, 00:09:38.769 "reset": true, 00:09:38.769 "nvme_admin": false, 00:09:38.769 "nvme_io": false, 00:09:38.769 "nvme_io_md": false, 00:09:38.769 "write_zeroes": true, 00:09:38.769 "zcopy": true, 00:09:38.769 "get_zone_info": false, 00:09:38.769 "zone_management": false, 00:09:38.769 "zone_append": false, 00:09:38.769 "compare": false, 00:09:38.769 "compare_and_write": false, 00:09:38.769 "abort": true, 00:09:38.769 "seek_hole": false, 00:09:38.769 "seek_data": false, 00:09:38.769 "copy": true, 00:09:38.769 "nvme_iov_md": false 00:09:38.769 }, 00:09:38.769 "memory_domains": [ 00:09:38.769 { 00:09:38.769 "dma_device_id": "system", 00:09:38.769 "dma_device_type": 1 00:09:38.769 }, 00:09:38.769 { 00:09:38.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.769 "dma_device_type": 2 00:09:38.769 } 00:09:38.769 ], 00:09:38.769 "driver_specific": {} 00:09:38.769 } 00:09:38.769 ] 00:09:38.769 06:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:38.769 06:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:38.769 06:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:38.769 06:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:38.769 06:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:38.769 06:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:38.769 06:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:38.769 06:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:38.769 06:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:38.770 06:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:09:38.770 06:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:38.770 06:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:38.770 06:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:38.770 06:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:38.770 06:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.029 06:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:39.029 "name": "Existed_Raid", 00:09:39.029 "uuid": "7cdd89ae-f3e1-45c8-b366-a9ee693a7f57", 00:09:39.029 "strip_size_kb": 64, 00:09:39.029 "state": "configuring", 00:09:39.029 "raid_level": "raid0", 00:09:39.029 "superblock": true, 00:09:39.029 "num_base_bdevs": 3, 00:09:39.029 "num_base_bdevs_discovered": 2, 00:09:39.029 "num_base_bdevs_operational": 3, 00:09:39.029 "base_bdevs_list": [ 00:09:39.029 { 00:09:39.029 "name": "BaseBdev1", 00:09:39.029 "uuid": "4fc11bfd-2a75-4c94-952f-aca885e9512b", 00:09:39.029 "is_configured": true, 00:09:39.029 "data_offset": 2048, 00:09:39.029 "data_size": 63488 00:09:39.029 }, 00:09:39.029 { 00:09:39.029 "name": "BaseBdev2", 00:09:39.029 "uuid": "7e729aae-7030-4cfc-b3d5-b5c82d943923", 00:09:39.029 "is_configured": true, 00:09:39.029 "data_offset": 2048, 00:09:39.029 "data_size": 63488 00:09:39.029 }, 00:09:39.029 { 00:09:39.029 "name": "BaseBdev3", 00:09:39.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.029 "is_configured": false, 00:09:39.029 "data_offset": 0, 00:09:39.029 "data_size": 0 00:09:39.029 } 00:09:39.029 ] 00:09:39.029 }' 00:09:39.029 06:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:39.029 06:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.598 06:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:39.858 [2024-08-14 06:41:06.918533] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.858 [2024-08-14 06:41:06.918813] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:39.858 [2024-08-14 06:41:06.918833] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:39.858 [2024-08-14 06:41:06.919127] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:39.858 [2024-08-14 06:41:06.919275] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:39.858 [2024-08-14 06:41:06.919289] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:39.858 [2024-08-14 06:41:06.919420] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.858 BaseBdev3 00:09:39.858 06:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:09:39.858 06:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:09:39.858 06:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local 
bdev_timeout= 00:09:39.858 06:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:39.858 06:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:39.858 06:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:39.858 06:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:40.117 06:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:40.117 [ 00:09:40.117 { 00:09:40.117 "name": "BaseBdev3", 00:09:40.117 "aliases": [ 00:09:40.117 "3671529b-5dbe-4f4f-827a-6aed55f68416" 00:09:40.117 ], 00:09:40.117 "product_name": "Malloc disk", 00:09:40.117 "block_size": 512, 00:09:40.117 "num_blocks": 65536, 00:09:40.117 "uuid": "3671529b-5dbe-4f4f-827a-6aed55f68416", 00:09:40.117 "assigned_rate_limits": { 00:09:40.117 "rw_ios_per_sec": 0, 00:09:40.117 "rw_mbytes_per_sec": 0, 00:09:40.118 "r_mbytes_per_sec": 0, 00:09:40.118 "w_mbytes_per_sec": 0 00:09:40.118 }, 00:09:40.118 "claimed": true, 00:09:40.118 "claim_type": "exclusive_write", 00:09:40.118 "zoned": false, 00:09:40.118 "supported_io_types": { 00:09:40.118 "read": true, 00:09:40.118 "write": true, 00:09:40.118 "unmap": true, 00:09:40.118 "flush": true, 00:09:40.118 "reset": true, 00:09:40.118 "nvme_admin": false, 00:09:40.118 "nvme_io": false, 00:09:40.118 "nvme_io_md": false, 00:09:40.118 "write_zeroes": true, 00:09:40.118 "zcopy": true, 00:09:40.118 "get_zone_info": false, 00:09:40.118 "zone_management": false, 00:09:40.118 "zone_append": false, 00:09:40.118 "compare": false, 00:09:40.118 "compare_and_write": false, 00:09:40.118 "abort": true, 00:09:40.118 "seek_hole": false, 00:09:40.118 "seek_data": false, 00:09:40.118 "copy": true, 00:09:40.118 "nvme_iov_md": false 00:09:40.118 }, 00:09:40.118 "memory_domains": [ 00:09:40.118 { 00:09:40.118 "dma_device_id": "system", 00:09:40.118 "dma_device_type": 1 00:09:40.118 }, 00:09:40.118 { 00:09:40.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.118 "dma_device_type": 2 00:09:40.118 } 00:09:40.118 ], 00:09:40.118 "driver_specific": {} 00:09:40.118 } 00:09:40.118 ] 00:09:40.377 06:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:40.377 06:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:40.377 06:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:40.377 06:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:40.377 06:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:40.377 06:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:40.377 06:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:40.377 06:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:40.377 06:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:40.377 06:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:09:40.377 06:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:40.377 06:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:40.377 06:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:40.377 06:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.377 06:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.377 06:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:40.377 "name": "Existed_Raid", 00:09:40.377 "uuid": "7cdd89ae-f3e1-45c8-b366-a9ee693a7f57", 00:09:40.377 "strip_size_kb": 64, 00:09:40.377 "state": "online", 00:09:40.377 "raid_level": "raid0", 00:09:40.377 "superblock": true, 00:09:40.377 "num_base_bdevs": 3, 00:09:40.377 "num_base_bdevs_discovered": 3, 00:09:40.377 "num_base_bdevs_operational": 3, 00:09:40.377 "base_bdevs_list": [ 00:09:40.377 { 00:09:40.377 "name": "BaseBdev1", 00:09:40.377 "uuid": "4fc11bfd-2a75-4c94-952f-aca885e9512b", 00:09:40.377 "is_configured": true, 00:09:40.377 "data_offset": 2048, 00:09:40.377 "data_size": 63488 00:09:40.377 }, 00:09:40.377 { 00:09:40.377 "name": "BaseBdev2", 00:09:40.377 "uuid": "7e729aae-7030-4cfc-b3d5-b5c82d943923", 00:09:40.377 "is_configured": true, 00:09:40.377 "data_offset": 2048, 00:09:40.377 "data_size": 63488 00:09:40.377 }, 00:09:40.377 { 00:09:40.377 "name": "BaseBdev3", 00:09:40.377 "uuid": "3671529b-5dbe-4f4f-827a-6aed55f68416", 00:09:40.378 "is_configured": true, 00:09:40.378 "data_offset": 2048, 00:09:40.378 "data_size": 63488 00:09:40.378 } 00:09:40.378 ] 00:09:40.378 }' 00:09:40.378 06:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:40.378 06:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.946 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:40.946 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:40.946 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:40.946 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:40.946 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:40.946 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:40.946 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:40.946 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:41.206 [2024-08-14 06:41:08.344558] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.206 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:41.206 "name": "Existed_Raid", 00:09:41.206 "aliases": [ 00:09:41.206 "7cdd89ae-f3e1-45c8-b366-a9ee693a7f57" 00:09:41.206 ], 00:09:41.206 "product_name": "Raid Volume", 00:09:41.206 "block_size": 512, 00:09:41.206 "num_blocks": 190464, 
00:09:41.206 "uuid": "7cdd89ae-f3e1-45c8-b366-a9ee693a7f57", 00:09:41.206 "assigned_rate_limits": { 00:09:41.206 "rw_ios_per_sec": 0, 00:09:41.206 "rw_mbytes_per_sec": 0, 00:09:41.206 "r_mbytes_per_sec": 0, 00:09:41.206 "w_mbytes_per_sec": 0 00:09:41.206 }, 00:09:41.206 "claimed": false, 00:09:41.206 "zoned": false, 00:09:41.206 "supported_io_types": { 00:09:41.206 "read": true, 00:09:41.206 "write": true, 00:09:41.206 "unmap": true, 00:09:41.206 "flush": true, 00:09:41.206 "reset": true, 00:09:41.206 "nvme_admin": false, 00:09:41.206 "nvme_io": false, 00:09:41.206 "nvme_io_md": false, 00:09:41.206 "write_zeroes": true, 00:09:41.206 "zcopy": false, 00:09:41.206 "get_zone_info": false, 00:09:41.206 "zone_management": false, 00:09:41.206 "zone_append": false, 00:09:41.206 "compare": false, 00:09:41.206 "compare_and_write": false, 00:09:41.206 "abort": false, 00:09:41.206 "seek_hole": false, 00:09:41.206 "seek_data": false, 00:09:41.206 "copy": false, 00:09:41.206 "nvme_iov_md": false 00:09:41.206 }, 00:09:41.206 "memory_domains": [ 00:09:41.206 { 00:09:41.206 "dma_device_id": "system", 00:09:41.206 "dma_device_type": 1 00:09:41.206 }, 00:09:41.206 { 00:09:41.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.206 "dma_device_type": 2 00:09:41.206 }, 00:09:41.206 { 00:09:41.206 "dma_device_id": "system", 00:09:41.206 "dma_device_type": 1 00:09:41.206 }, 00:09:41.206 { 00:09:41.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.206 "dma_device_type": 2 00:09:41.206 }, 00:09:41.206 { 00:09:41.206 "dma_device_id": "system", 00:09:41.206 "dma_device_type": 1 00:09:41.206 }, 00:09:41.206 { 00:09:41.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.206 "dma_device_type": 2 00:09:41.206 } 00:09:41.206 ], 00:09:41.206 "driver_specific": { 00:09:41.206 "raid": { 00:09:41.206 "uuid": "7cdd89ae-f3e1-45c8-b366-a9ee693a7f57", 00:09:41.206 "strip_size_kb": 64, 00:09:41.206 "state": "online", 00:09:41.206 "raid_level": "raid0", 00:09:41.206 "superblock": true, 00:09:41.206 "num_base_bdevs": 3, 00:09:41.206 "num_base_bdevs_discovered": 3, 00:09:41.206 "num_base_bdevs_operational": 3, 00:09:41.206 "base_bdevs_list": [ 00:09:41.206 { 00:09:41.206 "name": "BaseBdev1", 00:09:41.206 "uuid": "4fc11bfd-2a75-4c94-952f-aca885e9512b", 00:09:41.206 "is_configured": true, 00:09:41.206 "data_offset": 2048, 00:09:41.206 "data_size": 63488 00:09:41.206 }, 00:09:41.206 { 00:09:41.206 "name": "BaseBdev2", 00:09:41.206 "uuid": "7e729aae-7030-4cfc-b3d5-b5c82d943923", 00:09:41.206 "is_configured": true, 00:09:41.206 "data_offset": 2048, 00:09:41.206 "data_size": 63488 00:09:41.206 }, 00:09:41.206 { 00:09:41.206 "name": "BaseBdev3", 00:09:41.206 "uuid": "3671529b-5dbe-4f4f-827a-6aed55f68416", 00:09:41.206 "is_configured": true, 00:09:41.206 "data_offset": 2048, 00:09:41.206 "data_size": 63488 00:09:41.206 } 00:09:41.206 ] 00:09:41.206 } 00:09:41.206 } 00:09:41.206 }' 00:09:41.206 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.206 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:41.206 BaseBdev2 00:09:41.206 BaseBdev3' 00:09:41.206 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:41.206 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:41.206 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:41.465 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:41.465 "name": "BaseBdev1", 00:09:41.465 "aliases": [ 00:09:41.465 "4fc11bfd-2a75-4c94-952f-aca885e9512b" 00:09:41.465 ], 00:09:41.465 "product_name": "Malloc disk", 00:09:41.465 "block_size": 512, 00:09:41.465 "num_blocks": 65536, 00:09:41.465 "uuid": "4fc11bfd-2a75-4c94-952f-aca885e9512b", 00:09:41.465 "assigned_rate_limits": { 00:09:41.465 "rw_ios_per_sec": 0, 00:09:41.465 "rw_mbytes_per_sec": 0, 00:09:41.465 "r_mbytes_per_sec": 0, 00:09:41.465 "w_mbytes_per_sec": 0 00:09:41.465 }, 00:09:41.465 "claimed": true, 00:09:41.465 "claim_type": "exclusive_write", 00:09:41.465 "zoned": false, 00:09:41.465 "supported_io_types": { 00:09:41.465 "read": true, 00:09:41.465 "write": true, 00:09:41.465 "unmap": true, 00:09:41.465 "flush": true, 00:09:41.465 "reset": true, 00:09:41.465 "nvme_admin": false, 00:09:41.465 "nvme_io": false, 00:09:41.465 "nvme_io_md": false, 00:09:41.465 "write_zeroes": true, 00:09:41.465 "zcopy": true, 00:09:41.465 "get_zone_info": false, 00:09:41.465 "zone_management": false, 00:09:41.465 "zone_append": false, 00:09:41.465 "compare": false, 00:09:41.465 "compare_and_write": false, 00:09:41.465 "abort": true, 00:09:41.465 "seek_hole": false, 00:09:41.465 "seek_data": false, 00:09:41.465 "copy": true, 00:09:41.465 "nvme_iov_md": false 00:09:41.465 }, 00:09:41.465 "memory_domains": [ 00:09:41.465 { 00:09:41.465 "dma_device_id": "system", 00:09:41.465 "dma_device_type": 1 00:09:41.465 }, 00:09:41.465 { 00:09:41.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.465 "dma_device_type": 2 00:09:41.465 } 00:09:41.465 ], 00:09:41.465 "driver_specific": {} 00:09:41.465 }' 00:09:41.465 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:41.465 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:41.465 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:41.465 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:41.725 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:41.725 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:41.725 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:41.725 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:41.725 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:41.725 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:41.725 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:41.725 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:41.725 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:41.725 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:41.725 06:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:41.983 06:41:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:41.983 "name": "BaseBdev2", 00:09:41.983 "aliases": [ 00:09:41.983 "7e729aae-7030-4cfc-b3d5-b5c82d943923" 00:09:41.983 ], 00:09:41.983 "product_name": "Malloc disk", 00:09:41.984 "block_size": 512, 00:09:41.984 "num_blocks": 65536, 00:09:41.984 "uuid": "7e729aae-7030-4cfc-b3d5-b5c82d943923", 00:09:41.984 "assigned_rate_limits": { 00:09:41.984 "rw_ios_per_sec": 0, 00:09:41.984 "rw_mbytes_per_sec": 0, 00:09:41.984 "r_mbytes_per_sec": 0, 00:09:41.984 "w_mbytes_per_sec": 0 00:09:41.984 }, 00:09:41.984 "claimed": true, 00:09:41.984 "claim_type": "exclusive_write", 00:09:41.984 "zoned": false, 00:09:41.984 "supported_io_types": { 00:09:41.984 "read": true, 00:09:41.984 "write": true, 00:09:41.984 "unmap": true, 00:09:41.984 "flush": true, 00:09:41.984 "reset": true, 00:09:41.984 "nvme_admin": false, 00:09:41.984 "nvme_io": false, 00:09:41.984 "nvme_io_md": false, 00:09:41.984 "write_zeroes": true, 00:09:41.984 "zcopy": true, 00:09:41.984 "get_zone_info": false, 00:09:41.984 "zone_management": false, 00:09:41.984 "zone_append": false, 00:09:41.984 "compare": false, 00:09:41.984 "compare_and_write": false, 00:09:41.984 "abort": true, 00:09:41.984 "seek_hole": false, 00:09:41.984 "seek_data": false, 00:09:41.984 "copy": true, 00:09:41.984 "nvme_iov_md": false 00:09:41.984 }, 00:09:41.984 "memory_domains": [ 00:09:41.984 { 00:09:41.984 "dma_device_id": "system", 00:09:41.984 "dma_device_type": 1 00:09:41.984 }, 00:09:41.984 { 00:09:41.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.984 "dma_device_type": 2 00:09:41.984 } 00:09:41.984 ], 00:09:41.984 "driver_specific": {} 00:09:41.984 }' 00:09:41.984 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:41.984 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:42.243 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:42.243 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:42.243 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:42.243 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:42.243 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:42.243 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:42.243 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:42.243 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:42.243 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:42.243 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:42.243 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:42.243 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:42.243 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:42.502 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:42.502 "name": "BaseBdev3", 00:09:42.502 "aliases": [ 
00:09:42.502 "3671529b-5dbe-4f4f-827a-6aed55f68416" 00:09:42.502 ], 00:09:42.502 "product_name": "Malloc disk", 00:09:42.502 "block_size": 512, 00:09:42.502 "num_blocks": 65536, 00:09:42.502 "uuid": "3671529b-5dbe-4f4f-827a-6aed55f68416", 00:09:42.502 "assigned_rate_limits": { 00:09:42.502 "rw_ios_per_sec": 0, 00:09:42.502 "rw_mbytes_per_sec": 0, 00:09:42.502 "r_mbytes_per_sec": 0, 00:09:42.502 "w_mbytes_per_sec": 0 00:09:42.502 }, 00:09:42.502 "claimed": true, 00:09:42.502 "claim_type": "exclusive_write", 00:09:42.502 "zoned": false, 00:09:42.502 "supported_io_types": { 00:09:42.502 "read": true, 00:09:42.502 "write": true, 00:09:42.502 "unmap": true, 00:09:42.502 "flush": true, 00:09:42.502 "reset": true, 00:09:42.502 "nvme_admin": false, 00:09:42.502 "nvme_io": false, 00:09:42.502 "nvme_io_md": false, 00:09:42.502 "write_zeroes": true, 00:09:42.502 "zcopy": true, 00:09:42.502 "get_zone_info": false, 00:09:42.502 "zone_management": false, 00:09:42.502 "zone_append": false, 00:09:42.502 "compare": false, 00:09:42.502 "compare_and_write": false, 00:09:42.502 "abort": true, 00:09:42.502 "seek_hole": false, 00:09:42.502 "seek_data": false, 00:09:42.502 "copy": true, 00:09:42.502 "nvme_iov_md": false 00:09:42.502 }, 00:09:42.502 "memory_domains": [ 00:09:42.502 { 00:09:42.502 "dma_device_id": "system", 00:09:42.502 "dma_device_type": 1 00:09:42.502 }, 00:09:42.502 { 00:09:42.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.502 "dma_device_type": 2 00:09:42.502 } 00:09:42.502 ], 00:09:42.502 "driver_specific": {} 00:09:42.502 }' 00:09:42.502 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:42.502 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:42.502 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:42.502 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:42.762 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:42.762 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:42.762 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:42.762 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:42.762 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:42.762 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:42.762 06:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:43.021 [2024-08-14 06:41:10.189179] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:43.021 [2024-08-14 06:41:10.189224] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.021 [2024-08-14 06:41:10.189279] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.021 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:43.299 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:43.299 "name": "Existed_Raid", 00:09:43.299 "uuid": "7cdd89ae-f3e1-45c8-b366-a9ee693a7f57", 00:09:43.299 "strip_size_kb": 64, 00:09:43.299 "state": "offline", 00:09:43.299 "raid_level": "raid0", 00:09:43.299 "superblock": true, 00:09:43.299 "num_base_bdevs": 3, 00:09:43.299 "num_base_bdevs_discovered": 2, 00:09:43.299 "num_base_bdevs_operational": 2, 00:09:43.299 "base_bdevs_list": [ 00:09:43.299 { 00:09:43.299 "name": null, 00:09:43.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.299 "is_configured": false, 00:09:43.299 "data_offset": 2048, 00:09:43.299 "data_size": 63488 00:09:43.299 }, 00:09:43.299 { 00:09:43.299 "name": "BaseBdev2", 00:09:43.299 "uuid": "7e729aae-7030-4cfc-b3d5-b5c82d943923", 00:09:43.299 "is_configured": true, 00:09:43.299 "data_offset": 2048, 00:09:43.299 "data_size": 63488 00:09:43.299 }, 00:09:43.299 { 00:09:43.299 "name": "BaseBdev3", 00:09:43.299 "uuid": "3671529b-5dbe-4f4f-827a-6aed55f68416", 00:09:43.299 "is_configured": true, 00:09:43.299 "data_offset": 2048, 00:09:43.299 "data_size": 63488 00:09:43.299 } 00:09:43.299 ] 00:09:43.299 }' 00:09:43.299 06:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:43.299 06:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.928 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:43.928 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:43.928 06:41:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:43.928 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:44.186 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:44.186 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:44.186 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:44.445 [2024-08-14 06:41:11.454806] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:44.445 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:44.445 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:44.445 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.445 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:44.704 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:44.704 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:44.704 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:44.704 [2024-08-14 06:41:11.897994] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:44.704 [2024-08-14 06:41:11.898064] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:44.704 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:44.704 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:44.704 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.704 06:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:44.962 06:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:44.963 06:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:44.963 06:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:09:44.963 06:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:09:44.963 06:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:44.963 06:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:45.221 BaseBdev2 00:09:45.221 06:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:09:45.221 06:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:09:45.221 06:41:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:45.221 06:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:45.221 06:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:45.221 06:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:45.221 06:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:45.480 06:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:45.740 [ 00:09:45.740 { 00:09:45.740 "name": "BaseBdev2", 00:09:45.740 "aliases": [ 00:09:45.740 "7852ba30-6e7a-4a5c-9371-d7227404f202" 00:09:45.740 ], 00:09:45.740 "product_name": "Malloc disk", 00:09:45.740 "block_size": 512, 00:09:45.740 "num_blocks": 65536, 00:09:45.740 "uuid": "7852ba30-6e7a-4a5c-9371-d7227404f202", 00:09:45.740 "assigned_rate_limits": { 00:09:45.740 "rw_ios_per_sec": 0, 00:09:45.740 "rw_mbytes_per_sec": 0, 00:09:45.740 "r_mbytes_per_sec": 0, 00:09:45.740 "w_mbytes_per_sec": 0 00:09:45.740 }, 00:09:45.740 "claimed": false, 00:09:45.740 "zoned": false, 00:09:45.740 "supported_io_types": { 00:09:45.740 "read": true, 00:09:45.740 "write": true, 00:09:45.740 "unmap": true, 00:09:45.740 "flush": true, 00:09:45.740 "reset": true, 00:09:45.740 "nvme_admin": false, 00:09:45.740 "nvme_io": false, 00:09:45.740 "nvme_io_md": false, 00:09:45.740 "write_zeroes": true, 00:09:45.740 "zcopy": true, 00:09:45.740 "get_zone_info": false, 00:09:45.740 "zone_management": false, 00:09:45.740 "zone_append": false, 00:09:45.740 "compare": false, 00:09:45.740 "compare_and_write": false, 00:09:45.740 "abort": true, 00:09:45.740 "seek_hole": false, 00:09:45.740 "seek_data": false, 00:09:45.740 "copy": true, 00:09:45.740 "nvme_iov_md": false 00:09:45.740 }, 00:09:45.740 "memory_domains": [ 00:09:45.740 { 00:09:45.740 "dma_device_id": "system", 00:09:45.740 "dma_device_type": 1 00:09:45.740 }, 00:09:45.740 { 00:09:45.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.740 "dma_device_type": 2 00:09:45.740 } 00:09:45.740 ], 00:09:45.740 "driver_specific": {} 00:09:45.740 } 00:09:45.740 ] 00:09:45.740 06:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:45.740 06:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:45.740 06:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:45.740 06:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:45.999 BaseBdev3 00:09:45.999 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:09:45.999 06:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:09:45.999 06:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:45.999 06:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:45.999 06:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
00:09:45.999 06:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:45.999 06:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:46.259 06:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:46.259 [ 00:09:46.259 { 00:09:46.259 "name": "BaseBdev3", 00:09:46.259 "aliases": [ 00:09:46.259 "2c9d48ca-1def-4965-baaa-95f1dacaa0fc" 00:09:46.259 ], 00:09:46.259 "product_name": "Malloc disk", 00:09:46.259 "block_size": 512, 00:09:46.259 "num_blocks": 65536, 00:09:46.259 "uuid": "2c9d48ca-1def-4965-baaa-95f1dacaa0fc", 00:09:46.259 "assigned_rate_limits": { 00:09:46.260 "rw_ios_per_sec": 0, 00:09:46.260 "rw_mbytes_per_sec": 0, 00:09:46.260 "r_mbytes_per_sec": 0, 00:09:46.260 "w_mbytes_per_sec": 0 00:09:46.260 }, 00:09:46.260 "claimed": false, 00:09:46.260 "zoned": false, 00:09:46.260 "supported_io_types": { 00:09:46.260 "read": true, 00:09:46.260 "write": true, 00:09:46.260 "unmap": true, 00:09:46.260 "flush": true, 00:09:46.260 "reset": true, 00:09:46.260 "nvme_admin": false, 00:09:46.260 "nvme_io": false, 00:09:46.260 "nvme_io_md": false, 00:09:46.260 "write_zeroes": true, 00:09:46.260 "zcopy": true, 00:09:46.260 "get_zone_info": false, 00:09:46.260 "zone_management": false, 00:09:46.260 "zone_append": false, 00:09:46.260 "compare": false, 00:09:46.260 "compare_and_write": false, 00:09:46.260 "abort": true, 00:09:46.260 "seek_hole": false, 00:09:46.260 "seek_data": false, 00:09:46.260 "copy": true, 00:09:46.260 "nvme_iov_md": false 00:09:46.260 }, 00:09:46.260 "memory_domains": [ 00:09:46.260 { 00:09:46.260 "dma_device_id": "system", 00:09:46.260 "dma_device_type": 1 00:09:46.260 }, 00:09:46.260 { 00:09:46.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.260 "dma_device_type": 2 00:09:46.260 } 00:09:46.260 ], 00:09:46.260 "driver_specific": {} 00:09:46.260 } 00:09:46.260 ] 00:09:46.260 06:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:46.260 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:46.260 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:46.260 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:46.518 [2024-08-14 06:41:13.648694] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:46.518 [2024-08-14 06:41:13.648839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:46.519 [2024-08-14 06:41:13.648886] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.519 [2024-08-14 06:41:13.650778] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.519 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:46.519 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:46.519 06:41:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:46.519 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:46.519 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:46.519 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:46.519 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:46.519 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:46.519 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:46.519 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:46.519 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:46.519 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.778 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:46.778 "name": "Existed_Raid", 00:09:46.778 "uuid": "7c65695b-2548-4558-a0a5-adbce6784d3b", 00:09:46.778 "strip_size_kb": 64, 00:09:46.778 "state": "configuring", 00:09:46.778 "raid_level": "raid0", 00:09:46.778 "superblock": true, 00:09:46.778 "num_base_bdevs": 3, 00:09:46.778 "num_base_bdevs_discovered": 2, 00:09:46.778 "num_base_bdevs_operational": 3, 00:09:46.778 "base_bdevs_list": [ 00:09:46.778 { 00:09:46.778 "name": "BaseBdev1", 00:09:46.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.778 "is_configured": false, 00:09:46.778 "data_offset": 0, 00:09:46.778 "data_size": 0 00:09:46.778 }, 00:09:46.778 { 00:09:46.778 "name": "BaseBdev2", 00:09:46.778 "uuid": "7852ba30-6e7a-4a5c-9371-d7227404f202", 00:09:46.778 "is_configured": true, 00:09:46.778 "data_offset": 2048, 00:09:46.778 "data_size": 63488 00:09:46.778 }, 00:09:46.778 { 00:09:46.778 "name": "BaseBdev3", 00:09:46.778 "uuid": "2c9d48ca-1def-4965-baaa-95f1dacaa0fc", 00:09:46.778 "is_configured": true, 00:09:46.778 "data_offset": 2048, 00:09:46.778 "data_size": 63488 00:09:46.778 } 00:09:46.778 ] 00:09:46.778 }' 00:09:46.778 06:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:46.778 06:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.348 06:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:09:47.608 [2024-08-14 06:41:14.658953] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:47.608 06:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:47.608 06:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:47.608 06:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:47.608 06:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:47.608 06:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:47.608 06:41:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:47.608 06:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:47.608 06:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:47.608 06:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:47.608 06:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:47.608 06:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:47.608 06:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.867 06:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:47.868 "name": "Existed_Raid", 00:09:47.868 "uuid": "7c65695b-2548-4558-a0a5-adbce6784d3b", 00:09:47.868 "strip_size_kb": 64, 00:09:47.868 "state": "configuring", 00:09:47.868 "raid_level": "raid0", 00:09:47.868 "superblock": true, 00:09:47.868 "num_base_bdevs": 3, 00:09:47.868 "num_base_bdevs_discovered": 1, 00:09:47.868 "num_base_bdevs_operational": 3, 00:09:47.868 "base_bdevs_list": [ 00:09:47.868 { 00:09:47.868 "name": "BaseBdev1", 00:09:47.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.868 "is_configured": false, 00:09:47.868 "data_offset": 0, 00:09:47.868 "data_size": 0 00:09:47.868 }, 00:09:47.868 { 00:09:47.868 "name": null, 00:09:47.868 "uuid": "7852ba30-6e7a-4a5c-9371-d7227404f202", 00:09:47.868 "is_configured": false, 00:09:47.868 "data_offset": 2048, 00:09:47.868 "data_size": 63488 00:09:47.868 }, 00:09:47.868 { 00:09:47.868 "name": "BaseBdev3", 00:09:47.868 "uuid": "2c9d48ca-1def-4965-baaa-95f1dacaa0fc", 00:09:47.868 "is_configured": true, 00:09:47.868 "data_offset": 2048, 00:09:47.868 "data_size": 63488 00:09:47.868 } 00:09:47.868 ] 00:09:47.868 }' 00:09:47.868 06:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:47.868 06:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.436 06:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:48.436 06:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:48.437 06:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:09:48.437 06:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:48.696 [2024-08-14 06:41:15.840006] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.696 BaseBdev1 00:09:48.696 06:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:09:48.696 06:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:09:48.696 06:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:48.696 06:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:48.696 06:41:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:48.696 06:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:48.696 06:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:48.962 06:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:49.228 [ 00:09:49.228 { 00:09:49.228 "name": "BaseBdev1", 00:09:49.228 "aliases": [ 00:09:49.228 "e39d9959-c3dd-46fc-a626-477201f57636" 00:09:49.228 ], 00:09:49.228 "product_name": "Malloc disk", 00:09:49.228 "block_size": 512, 00:09:49.228 "num_blocks": 65536, 00:09:49.228 "uuid": "e39d9959-c3dd-46fc-a626-477201f57636", 00:09:49.228 "assigned_rate_limits": { 00:09:49.228 "rw_ios_per_sec": 0, 00:09:49.228 "rw_mbytes_per_sec": 0, 00:09:49.228 "r_mbytes_per_sec": 0, 00:09:49.228 "w_mbytes_per_sec": 0 00:09:49.228 }, 00:09:49.228 "claimed": true, 00:09:49.229 "claim_type": "exclusive_write", 00:09:49.229 "zoned": false, 00:09:49.229 "supported_io_types": { 00:09:49.229 "read": true, 00:09:49.229 "write": true, 00:09:49.229 "unmap": true, 00:09:49.229 "flush": true, 00:09:49.229 "reset": true, 00:09:49.229 "nvme_admin": false, 00:09:49.229 "nvme_io": false, 00:09:49.229 "nvme_io_md": false, 00:09:49.229 "write_zeroes": true, 00:09:49.229 "zcopy": true, 00:09:49.229 "get_zone_info": false, 00:09:49.229 "zone_management": false, 00:09:49.229 "zone_append": false, 00:09:49.229 "compare": false, 00:09:49.229 "compare_and_write": false, 00:09:49.229 "abort": true, 00:09:49.229 "seek_hole": false, 00:09:49.229 "seek_data": false, 00:09:49.229 "copy": true, 00:09:49.229 "nvme_iov_md": false 00:09:49.229 }, 00:09:49.229 "memory_domains": [ 00:09:49.229 { 00:09:49.229 "dma_device_id": "system", 00:09:49.229 "dma_device_type": 1 00:09:49.229 }, 00:09:49.229 { 00:09:49.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.229 "dma_device_type": 2 00:09:49.229 } 00:09:49.229 ], 00:09:49.229 "driver_specific": {} 00:09:49.229 } 00:09:49.229 ] 00:09:49.229 06:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:49.229 06:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:49.229 06:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:49.229 06:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:49.229 06:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:49.229 06:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:49.229 06:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:49.229 06:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:49.229 06:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:49.229 06:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:49.229 06:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:09:49.229 06:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:49.229 06:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.229 06:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:49.229 "name": "Existed_Raid", 00:09:49.229 "uuid": "7c65695b-2548-4558-a0a5-adbce6784d3b", 00:09:49.229 "strip_size_kb": 64, 00:09:49.229 "state": "configuring", 00:09:49.229 "raid_level": "raid0", 00:09:49.229 "superblock": true, 00:09:49.229 "num_base_bdevs": 3, 00:09:49.229 "num_base_bdevs_discovered": 2, 00:09:49.229 "num_base_bdevs_operational": 3, 00:09:49.229 "base_bdevs_list": [ 00:09:49.229 { 00:09:49.229 "name": "BaseBdev1", 00:09:49.229 "uuid": "e39d9959-c3dd-46fc-a626-477201f57636", 00:09:49.229 "is_configured": true, 00:09:49.229 "data_offset": 2048, 00:09:49.229 "data_size": 63488 00:09:49.229 }, 00:09:49.229 { 00:09:49.229 "name": null, 00:09:49.229 "uuid": "7852ba30-6e7a-4a5c-9371-d7227404f202", 00:09:49.229 "is_configured": false, 00:09:49.229 "data_offset": 2048, 00:09:49.229 "data_size": 63488 00:09:49.229 }, 00:09:49.229 { 00:09:49.229 "name": "BaseBdev3", 00:09:49.229 "uuid": "2c9d48ca-1def-4965-baaa-95f1dacaa0fc", 00:09:49.229 "is_configured": true, 00:09:49.229 "data_offset": 2048, 00:09:49.229 "data_size": 63488 00:09:49.229 } 00:09:49.229 ] 00:09:49.229 }' 00:09:49.229 06:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:49.229 06:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.797 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:49.797 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:50.055 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:09:50.056 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:09:50.315 [2024-08-14 06:41:17.369467] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:50.315 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:50.315 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:50.315 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:50.315 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:50.315 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:50.315 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:50.315 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:50.315 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:50.315 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:09:50.315 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:50.315 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:50.315 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.575 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:50.575 "name": "Existed_Raid", 00:09:50.575 "uuid": "7c65695b-2548-4558-a0a5-adbce6784d3b", 00:09:50.575 "strip_size_kb": 64, 00:09:50.575 "state": "configuring", 00:09:50.575 "raid_level": "raid0", 00:09:50.575 "superblock": true, 00:09:50.575 "num_base_bdevs": 3, 00:09:50.575 "num_base_bdevs_discovered": 1, 00:09:50.575 "num_base_bdevs_operational": 3, 00:09:50.575 "base_bdevs_list": [ 00:09:50.575 { 00:09:50.575 "name": "BaseBdev1", 00:09:50.575 "uuid": "e39d9959-c3dd-46fc-a626-477201f57636", 00:09:50.575 "is_configured": true, 00:09:50.575 "data_offset": 2048, 00:09:50.575 "data_size": 63488 00:09:50.575 }, 00:09:50.575 { 00:09:50.575 "name": null, 00:09:50.575 "uuid": "7852ba30-6e7a-4a5c-9371-d7227404f202", 00:09:50.575 "is_configured": false, 00:09:50.575 "data_offset": 2048, 00:09:50.575 "data_size": 63488 00:09:50.575 }, 00:09:50.575 { 00:09:50.575 "name": null, 00:09:50.575 "uuid": "2c9d48ca-1def-4965-baaa-95f1dacaa0fc", 00:09:50.575 "is_configured": false, 00:09:50.575 "data_offset": 2048, 00:09:50.575 "data_size": 63488 00:09:50.575 } 00:09:50.575 ] 00:09:50.575 }' 00:09:50.575 06:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:50.575 06:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.146 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.146 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:51.146 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:09:51.146 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:51.406 [2024-08-14 06:41:18.567494] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.406 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:51.406 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:51.406 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:51.406 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:51.406 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:51.406 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:51.406 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:51.406 06:41:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:51.406 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:51.406 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:51.406 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.406 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.665 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:51.665 "name": "Existed_Raid", 00:09:51.665 "uuid": "7c65695b-2548-4558-a0a5-adbce6784d3b", 00:09:51.665 "strip_size_kb": 64, 00:09:51.665 "state": "configuring", 00:09:51.665 "raid_level": "raid0", 00:09:51.665 "superblock": true, 00:09:51.665 "num_base_bdevs": 3, 00:09:51.665 "num_base_bdevs_discovered": 2, 00:09:51.665 "num_base_bdevs_operational": 3, 00:09:51.665 "base_bdevs_list": [ 00:09:51.665 { 00:09:51.665 "name": "BaseBdev1", 00:09:51.665 "uuid": "e39d9959-c3dd-46fc-a626-477201f57636", 00:09:51.665 "is_configured": true, 00:09:51.665 "data_offset": 2048, 00:09:51.665 "data_size": 63488 00:09:51.665 }, 00:09:51.665 { 00:09:51.665 "name": null, 00:09:51.665 "uuid": "7852ba30-6e7a-4a5c-9371-d7227404f202", 00:09:51.665 "is_configured": false, 00:09:51.665 "data_offset": 2048, 00:09:51.665 "data_size": 63488 00:09:51.665 }, 00:09:51.665 { 00:09:51.665 "name": "BaseBdev3", 00:09:51.665 "uuid": "2c9d48ca-1def-4965-baaa-95f1dacaa0fc", 00:09:51.665 "is_configured": true, 00:09:51.665 "data_offset": 2048, 00:09:51.665 "data_size": 63488 00:09:51.665 } 00:09:51.665 ] 00:09:51.665 }' 00:09:51.665 06:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:51.665 06:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.234 06:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:52.234 06:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.493 06:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:09:52.493 06:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:52.752 [2024-08-14 06:41:19.809482] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:52.752 06:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:52.752 06:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:52.752 06:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:52.752 06:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:52.752 06:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:52.752 06:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:52.752 06:41:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:52.752 06:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:52.752 06:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:52.752 06:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:52.752 06:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.752 06:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.011 06:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:53.011 "name": "Existed_Raid", 00:09:53.011 "uuid": "7c65695b-2548-4558-a0a5-adbce6784d3b", 00:09:53.011 "strip_size_kb": 64, 00:09:53.011 "state": "configuring", 00:09:53.011 "raid_level": "raid0", 00:09:53.011 "superblock": true, 00:09:53.011 "num_base_bdevs": 3, 00:09:53.011 "num_base_bdevs_discovered": 1, 00:09:53.011 "num_base_bdevs_operational": 3, 00:09:53.011 "base_bdevs_list": [ 00:09:53.011 { 00:09:53.011 "name": null, 00:09:53.011 "uuid": "e39d9959-c3dd-46fc-a626-477201f57636", 00:09:53.011 "is_configured": false, 00:09:53.011 "data_offset": 2048, 00:09:53.011 "data_size": 63488 00:09:53.011 }, 00:09:53.011 { 00:09:53.011 "name": null, 00:09:53.011 "uuid": "7852ba30-6e7a-4a5c-9371-d7227404f202", 00:09:53.011 "is_configured": false, 00:09:53.011 "data_offset": 2048, 00:09:53.011 "data_size": 63488 00:09:53.011 }, 00:09:53.011 { 00:09:53.011 "name": "BaseBdev3", 00:09:53.011 "uuid": "2c9d48ca-1def-4965-baaa-95f1dacaa0fc", 00:09:53.011 "is_configured": true, 00:09:53.011 "data_offset": 2048, 00:09:53.011 "data_size": 63488 00:09:53.011 } 00:09:53.011 ] 00:09:53.011 }' 00:09:53.011 06:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:53.011 06:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.581 06:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.581 06:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:53.581 06:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:09:53.581 06:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:53.841 [2024-08-14 06:41:20.994582] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.841 06:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:53.841 06:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:53.841 06:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:53.841 06:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:53.841 06:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:53.841 06:41:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:53.841 06:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:53.841 06:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:53.841 06:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:53.841 06:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:53.841 06:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.841 06:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.101 06:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:54.101 "name": "Existed_Raid", 00:09:54.101 "uuid": "7c65695b-2548-4558-a0a5-adbce6784d3b", 00:09:54.101 "strip_size_kb": 64, 00:09:54.101 "state": "configuring", 00:09:54.101 "raid_level": "raid0", 00:09:54.101 "superblock": true, 00:09:54.101 "num_base_bdevs": 3, 00:09:54.101 "num_base_bdevs_discovered": 2, 00:09:54.101 "num_base_bdevs_operational": 3, 00:09:54.101 "base_bdevs_list": [ 00:09:54.101 { 00:09:54.101 "name": null, 00:09:54.101 "uuid": "e39d9959-c3dd-46fc-a626-477201f57636", 00:09:54.101 "is_configured": false, 00:09:54.101 "data_offset": 2048, 00:09:54.101 "data_size": 63488 00:09:54.101 }, 00:09:54.101 { 00:09:54.101 "name": "BaseBdev2", 00:09:54.101 "uuid": "7852ba30-6e7a-4a5c-9371-d7227404f202", 00:09:54.101 "is_configured": true, 00:09:54.101 "data_offset": 2048, 00:09:54.101 "data_size": 63488 00:09:54.101 }, 00:09:54.101 { 00:09:54.101 "name": "BaseBdev3", 00:09:54.101 "uuid": "2c9d48ca-1def-4965-baaa-95f1dacaa0fc", 00:09:54.101 "is_configured": true, 00:09:54.101 "data_offset": 2048, 00:09:54.101 "data_size": 63488 00:09:54.101 } 00:09:54.101 ] 00:09:54.101 }' 00:09:54.101 06:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:54.101 06:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.671 06:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.671 06:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:54.931 06:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:09:54.931 06:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.931 06:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:55.191 06:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e39d9959-c3dd-46fc-a626-477201f57636 00:09:55.451 [2024-08-14 06:41:22.502985] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:55.451 [2024-08-14 06:41:22.503191] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:55.451 [2024-08-14 
06:41:22.503205] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:55.451 [2024-08-14 06:41:22.503474] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:09:55.451 [2024-08-14 06:41:22.503592] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:55.451 [2024-08-14 06:41:22.503604] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:55.451 NewBaseBdev 00:09:55.451 [2024-08-14 06:41:22.503706] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.451 06:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:09:55.451 06:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:09:55.451 06:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:55.451 06:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:55.451 06:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:55.451 06:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:55.451 06:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:55.710 06:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:55.710 [ 00:09:55.710 { 00:09:55.710 "name": "NewBaseBdev", 00:09:55.710 "aliases": [ 00:09:55.710 "e39d9959-c3dd-46fc-a626-477201f57636" 00:09:55.710 ], 00:09:55.710 "product_name": "Malloc disk", 00:09:55.710 "block_size": 512, 00:09:55.710 "num_blocks": 65536, 00:09:55.710 "uuid": "e39d9959-c3dd-46fc-a626-477201f57636", 00:09:55.710 "assigned_rate_limits": { 00:09:55.710 "rw_ios_per_sec": 0, 00:09:55.710 "rw_mbytes_per_sec": 0, 00:09:55.710 "r_mbytes_per_sec": 0, 00:09:55.710 "w_mbytes_per_sec": 0 00:09:55.710 }, 00:09:55.710 "claimed": true, 00:09:55.710 "claim_type": "exclusive_write", 00:09:55.710 "zoned": false, 00:09:55.710 "supported_io_types": { 00:09:55.710 "read": true, 00:09:55.710 "write": true, 00:09:55.710 "unmap": true, 00:09:55.710 "flush": true, 00:09:55.710 "reset": true, 00:09:55.710 "nvme_admin": false, 00:09:55.710 "nvme_io": false, 00:09:55.710 "nvme_io_md": false, 00:09:55.710 "write_zeroes": true, 00:09:55.710 "zcopy": true, 00:09:55.710 "get_zone_info": false, 00:09:55.710 "zone_management": false, 00:09:55.710 "zone_append": false, 00:09:55.710 "compare": false, 00:09:55.710 "compare_and_write": false, 00:09:55.710 "abort": true, 00:09:55.710 "seek_hole": false, 00:09:55.710 "seek_data": false, 00:09:55.710 "copy": true, 00:09:55.710 "nvme_iov_md": false 00:09:55.710 }, 00:09:55.710 "memory_domains": [ 00:09:55.710 { 00:09:55.710 "dma_device_id": "system", 00:09:55.710 "dma_device_type": 1 00:09:55.710 }, 00:09:55.710 { 00:09:55.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.710 "dma_device_type": 2 00:09:55.710 } 00:09:55.710 ], 00:09:55.710 "driver_specific": {} 00:09:55.710 } 00:09:55.710 ] 00:09:55.710 06:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:55.710 06:41:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:55.710 06:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:55.710 06:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:55.710 06:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:55.710 06:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:55.710 06:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:55.710 06:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:55.710 06:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:55.710 06:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:55.710 06:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:55.710 06:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:55.710 06:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.970 06:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:55.970 "name": "Existed_Raid", 00:09:55.970 "uuid": "7c65695b-2548-4558-a0a5-adbce6784d3b", 00:09:55.970 "strip_size_kb": 64, 00:09:55.970 "state": "online", 00:09:55.970 "raid_level": "raid0", 00:09:55.970 "superblock": true, 00:09:55.970 "num_base_bdevs": 3, 00:09:55.970 "num_base_bdevs_discovered": 3, 00:09:55.970 "num_base_bdevs_operational": 3, 00:09:55.970 "base_bdevs_list": [ 00:09:55.970 { 00:09:55.970 "name": "NewBaseBdev", 00:09:55.970 "uuid": "e39d9959-c3dd-46fc-a626-477201f57636", 00:09:55.970 "is_configured": true, 00:09:55.970 "data_offset": 2048, 00:09:55.970 "data_size": 63488 00:09:55.970 }, 00:09:55.970 { 00:09:55.970 "name": "BaseBdev2", 00:09:55.970 "uuid": "7852ba30-6e7a-4a5c-9371-d7227404f202", 00:09:55.970 "is_configured": true, 00:09:55.970 "data_offset": 2048, 00:09:55.970 "data_size": 63488 00:09:55.970 }, 00:09:55.970 { 00:09:55.970 "name": "BaseBdev3", 00:09:55.970 "uuid": "2c9d48ca-1def-4965-baaa-95f1dacaa0fc", 00:09:55.970 "is_configured": true, 00:09:55.970 "data_offset": 2048, 00:09:55.970 "data_size": 63488 00:09:55.970 } 00:09:55.970 ] 00:09:55.970 }' 00:09:55.970 06:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:55.970 06:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.539 06:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:09:56.539 06:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:56.539 06:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:56.539 06:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:56.539 06:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:56.539 06:41:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@198 -- # local name 00:09:56.539 06:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:56.539 06:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:56.798 [2024-08-14 06:41:23.880991] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.798 06:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:56.798 "name": "Existed_Raid", 00:09:56.798 "aliases": [ 00:09:56.798 "7c65695b-2548-4558-a0a5-adbce6784d3b" 00:09:56.798 ], 00:09:56.798 "product_name": "Raid Volume", 00:09:56.798 "block_size": 512, 00:09:56.798 "num_blocks": 190464, 00:09:56.798 "uuid": "7c65695b-2548-4558-a0a5-adbce6784d3b", 00:09:56.798 "assigned_rate_limits": { 00:09:56.798 "rw_ios_per_sec": 0, 00:09:56.798 "rw_mbytes_per_sec": 0, 00:09:56.798 "r_mbytes_per_sec": 0, 00:09:56.798 "w_mbytes_per_sec": 0 00:09:56.798 }, 00:09:56.798 "claimed": false, 00:09:56.798 "zoned": false, 00:09:56.798 "supported_io_types": { 00:09:56.798 "read": true, 00:09:56.798 "write": true, 00:09:56.798 "unmap": true, 00:09:56.798 "flush": true, 00:09:56.798 "reset": true, 00:09:56.798 "nvme_admin": false, 00:09:56.798 "nvme_io": false, 00:09:56.798 "nvme_io_md": false, 00:09:56.798 "write_zeroes": true, 00:09:56.798 "zcopy": false, 00:09:56.798 "get_zone_info": false, 00:09:56.798 "zone_management": false, 00:09:56.798 "zone_append": false, 00:09:56.798 "compare": false, 00:09:56.798 "compare_and_write": false, 00:09:56.798 "abort": false, 00:09:56.798 "seek_hole": false, 00:09:56.798 "seek_data": false, 00:09:56.798 "copy": false, 00:09:56.798 "nvme_iov_md": false 00:09:56.798 }, 00:09:56.798 "memory_domains": [ 00:09:56.798 { 00:09:56.798 "dma_device_id": "system", 00:09:56.798 "dma_device_type": 1 00:09:56.798 }, 00:09:56.798 { 00:09:56.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.798 "dma_device_type": 2 00:09:56.798 }, 00:09:56.798 { 00:09:56.798 "dma_device_id": "system", 00:09:56.798 "dma_device_type": 1 00:09:56.798 }, 00:09:56.798 { 00:09:56.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.798 "dma_device_type": 2 00:09:56.798 }, 00:09:56.798 { 00:09:56.798 "dma_device_id": "system", 00:09:56.798 "dma_device_type": 1 00:09:56.798 }, 00:09:56.798 { 00:09:56.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.798 "dma_device_type": 2 00:09:56.798 } 00:09:56.799 ], 00:09:56.799 "driver_specific": { 00:09:56.799 "raid": { 00:09:56.799 "uuid": "7c65695b-2548-4558-a0a5-adbce6784d3b", 00:09:56.799 "strip_size_kb": 64, 00:09:56.799 "state": "online", 00:09:56.799 "raid_level": "raid0", 00:09:56.799 "superblock": true, 00:09:56.799 "num_base_bdevs": 3, 00:09:56.799 "num_base_bdevs_discovered": 3, 00:09:56.799 "num_base_bdevs_operational": 3, 00:09:56.799 "base_bdevs_list": [ 00:09:56.799 { 00:09:56.799 "name": "NewBaseBdev", 00:09:56.799 "uuid": "e39d9959-c3dd-46fc-a626-477201f57636", 00:09:56.799 "is_configured": true, 00:09:56.799 "data_offset": 2048, 00:09:56.799 "data_size": 63488 00:09:56.799 }, 00:09:56.799 { 00:09:56.799 "name": "BaseBdev2", 00:09:56.799 "uuid": "7852ba30-6e7a-4a5c-9371-d7227404f202", 00:09:56.799 "is_configured": true, 00:09:56.799 "data_offset": 2048, 00:09:56.799 "data_size": 63488 00:09:56.799 }, 00:09:56.799 { 00:09:56.799 "name": "BaseBdev3", 00:09:56.799 "uuid": "2c9d48ca-1def-4965-baaa-95f1dacaa0fc", 00:09:56.799 "is_configured": true, 
00:09:56.799 "data_offset": 2048, 00:09:56.799 "data_size": 63488 00:09:56.799 } 00:09:56.799 ] 00:09:56.799 } 00:09:56.799 } 00:09:56.799 }' 00:09:56.799 06:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.799 06:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:09:56.799 BaseBdev2 00:09:56.799 BaseBdev3' 00:09:56.799 06:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:56.799 06:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:09:56.799 06:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:57.058 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:57.058 "name": "NewBaseBdev", 00:09:57.058 "aliases": [ 00:09:57.058 "e39d9959-c3dd-46fc-a626-477201f57636" 00:09:57.058 ], 00:09:57.058 "product_name": "Malloc disk", 00:09:57.058 "block_size": 512, 00:09:57.058 "num_blocks": 65536, 00:09:57.058 "uuid": "e39d9959-c3dd-46fc-a626-477201f57636", 00:09:57.058 "assigned_rate_limits": { 00:09:57.058 "rw_ios_per_sec": 0, 00:09:57.058 "rw_mbytes_per_sec": 0, 00:09:57.058 "r_mbytes_per_sec": 0, 00:09:57.058 "w_mbytes_per_sec": 0 00:09:57.058 }, 00:09:57.058 "claimed": true, 00:09:57.058 "claim_type": "exclusive_write", 00:09:57.058 "zoned": false, 00:09:57.058 "supported_io_types": { 00:09:57.058 "read": true, 00:09:57.058 "write": true, 00:09:57.058 "unmap": true, 00:09:57.058 "flush": true, 00:09:57.058 "reset": true, 00:09:57.058 "nvme_admin": false, 00:09:57.058 "nvme_io": false, 00:09:57.058 "nvme_io_md": false, 00:09:57.058 "write_zeroes": true, 00:09:57.058 "zcopy": true, 00:09:57.058 "get_zone_info": false, 00:09:57.058 "zone_management": false, 00:09:57.058 "zone_append": false, 00:09:57.058 "compare": false, 00:09:57.058 "compare_and_write": false, 00:09:57.058 "abort": true, 00:09:57.058 "seek_hole": false, 00:09:57.058 "seek_data": false, 00:09:57.058 "copy": true, 00:09:57.058 "nvme_iov_md": false 00:09:57.058 }, 00:09:57.058 "memory_domains": [ 00:09:57.058 { 00:09:57.058 "dma_device_id": "system", 00:09:57.058 "dma_device_type": 1 00:09:57.058 }, 00:09:57.058 { 00:09:57.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.058 "dma_device_type": 2 00:09:57.058 } 00:09:57.058 ], 00:09:57.058 "driver_specific": {} 00:09:57.058 }' 00:09:57.058 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:57.058 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:57.058 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:57.058 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:57.058 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:57.058 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:57.058 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:57.317 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:57.317 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:09:57.317 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:57.317 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:57.317 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:57.317 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:57.317 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:57.317 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:57.576 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:57.576 "name": "BaseBdev2", 00:09:57.576 "aliases": [ 00:09:57.576 "7852ba30-6e7a-4a5c-9371-d7227404f202" 00:09:57.576 ], 00:09:57.576 "product_name": "Malloc disk", 00:09:57.576 "block_size": 512, 00:09:57.576 "num_blocks": 65536, 00:09:57.576 "uuid": "7852ba30-6e7a-4a5c-9371-d7227404f202", 00:09:57.576 "assigned_rate_limits": { 00:09:57.576 "rw_ios_per_sec": 0, 00:09:57.576 "rw_mbytes_per_sec": 0, 00:09:57.576 "r_mbytes_per_sec": 0, 00:09:57.576 "w_mbytes_per_sec": 0 00:09:57.576 }, 00:09:57.576 "claimed": true, 00:09:57.576 "claim_type": "exclusive_write", 00:09:57.576 "zoned": false, 00:09:57.576 "supported_io_types": { 00:09:57.576 "read": true, 00:09:57.576 "write": true, 00:09:57.576 "unmap": true, 00:09:57.576 "flush": true, 00:09:57.576 "reset": true, 00:09:57.576 "nvme_admin": false, 00:09:57.576 "nvme_io": false, 00:09:57.576 "nvme_io_md": false, 00:09:57.576 "write_zeroes": true, 00:09:57.576 "zcopy": true, 00:09:57.576 "get_zone_info": false, 00:09:57.576 "zone_management": false, 00:09:57.576 "zone_append": false, 00:09:57.576 "compare": false, 00:09:57.576 "compare_and_write": false, 00:09:57.576 "abort": true, 00:09:57.576 "seek_hole": false, 00:09:57.576 "seek_data": false, 00:09:57.576 "copy": true, 00:09:57.576 "nvme_iov_md": false 00:09:57.576 }, 00:09:57.576 "memory_domains": [ 00:09:57.576 { 00:09:57.576 "dma_device_id": "system", 00:09:57.576 "dma_device_type": 1 00:09:57.576 }, 00:09:57.576 { 00:09:57.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.576 "dma_device_type": 2 00:09:57.576 } 00:09:57.576 ], 00:09:57.576 "driver_specific": {} 00:09:57.576 }' 00:09:57.576 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:57.576 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:57.835 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:57.835 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:57.835 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:57.836 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:57.836 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:57.836 06:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:57.836 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:57.836 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:57.836 06:41:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:58.095 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:58.095 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:58.095 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:58.095 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:58.095 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:58.095 "name": "BaseBdev3", 00:09:58.095 "aliases": [ 00:09:58.095 "2c9d48ca-1def-4965-baaa-95f1dacaa0fc" 00:09:58.095 ], 00:09:58.095 "product_name": "Malloc disk", 00:09:58.095 "block_size": 512, 00:09:58.095 "num_blocks": 65536, 00:09:58.095 "uuid": "2c9d48ca-1def-4965-baaa-95f1dacaa0fc", 00:09:58.095 "assigned_rate_limits": { 00:09:58.095 "rw_ios_per_sec": 0, 00:09:58.095 "rw_mbytes_per_sec": 0, 00:09:58.095 "r_mbytes_per_sec": 0, 00:09:58.095 "w_mbytes_per_sec": 0 00:09:58.095 }, 00:09:58.095 "claimed": true, 00:09:58.095 "claim_type": "exclusive_write", 00:09:58.095 "zoned": false, 00:09:58.095 "supported_io_types": { 00:09:58.095 "read": true, 00:09:58.095 "write": true, 00:09:58.095 "unmap": true, 00:09:58.095 "flush": true, 00:09:58.095 "reset": true, 00:09:58.095 "nvme_admin": false, 00:09:58.095 "nvme_io": false, 00:09:58.095 "nvme_io_md": false, 00:09:58.095 "write_zeroes": true, 00:09:58.095 "zcopy": true, 00:09:58.095 "get_zone_info": false, 00:09:58.095 "zone_management": false, 00:09:58.095 "zone_append": false, 00:09:58.095 "compare": false, 00:09:58.095 "compare_and_write": false, 00:09:58.095 "abort": true, 00:09:58.095 "seek_hole": false, 00:09:58.095 "seek_data": false, 00:09:58.095 "copy": true, 00:09:58.095 "nvme_iov_md": false 00:09:58.095 }, 00:09:58.095 "memory_domains": [ 00:09:58.095 { 00:09:58.095 "dma_device_id": "system", 00:09:58.095 "dma_device_type": 1 00:09:58.095 }, 00:09:58.095 { 00:09:58.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.095 "dma_device_type": 2 00:09:58.095 } 00:09:58.095 ], 00:09:58.095 "driver_specific": {} 00:09:58.095 }' 00:09:58.095 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:58.355 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:58.355 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:58.355 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:58.355 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:58.355 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:58.355 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:58.355 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:58.355 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:58.614 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:58.614 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:58.614 06:41:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:58.614 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:58.874 [2024-08-14 06:41:25.877399] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.874 [2024-08-14 06:41:25.877552] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.874 [2024-08-14 06:41:25.877652] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.874 [2024-08-14 06:41:25.877715] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.874 [2024-08-14 06:41:25.877735] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:58.874 06:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 76336 00:09:58.874 06:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 76336 ']' 00:09:58.874 06:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 76336 00:09:58.874 06:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:09:58.874 06:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:58.874 06:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76336 00:09:58.874 killing process with pid 76336 00:09:58.874 06:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:58.874 06:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:58.874 06:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76336' 00:09:58.874 06:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 76336 00:09:58.874 [2024-08-14 06:41:25.941070] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.874 06:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 76336 00:09:58.874 [2024-08-14 06:41:25.972525] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.133 06:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:09:59.133 00:09:59.133 real 0m25.834s 00:09:59.133 user 0m48.118s 00:09:59.133 sys 0m3.821s 00:09:59.133 ************************************ 00:09:59.133 END TEST raid_state_function_test_sb 00:09:59.133 ************************************ 00:09:59.133 06:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:59.133 06:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.133 06:41:26 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:59.133 06:41:26 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:59.133 06:41:26 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:59.133 06:41:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.133 ************************************ 00:09:59.133 START TEST raid_superblock_test 00:09:59.133 ************************************ 00:09:59.133 06:41:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 3 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=77244 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 77244 /var/tmp/spdk-raid.sock 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 77244 ']' 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:59.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:59.133 06:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.133 [2024-08-14 06:41:26.377162] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:09:59.133 [2024-08-14 06:41:26.378107] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77244 ] 00:09:59.392 [2024-08-14 06:41:26.518223] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.392 [2024-08-14 06:41:26.580042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.392 [2024-08-14 06:41:26.622889] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.392 [2024-08-14 06:41:26.623009] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.002 06:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:00.002 06:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:10:00.002 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:10:00.002 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:10:00.002 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:10:00.002 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:10:00.002 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:00.002 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:00.002 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:10:00.002 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:00.002 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:00.259 malloc1 00:10:00.259 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:00.516 [2024-08-14 06:41:27.575385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:00.516 [2024-08-14 06:41:27.575574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.516 [2024-08-14 06:41:27.575635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:00.516 [2024-08-14 06:41:27.575682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.516 [2024-08-14 06:41:27.578132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.516 [2024-08-14 06:41:27.578237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:00.516 pt1 00:10:00.516 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:10:00.516 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:10:00.516 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:10:00.516 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:10:00.516 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:00.516 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:00.516 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:10:00.517 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:00.517 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:00.776 malloc2 00:10:00.776 06:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:00.776 [2024-08-14 06:41:28.027490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:00.776 [2024-08-14 06:41:28.027651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.776 [2024-08-14 06:41:28.027694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:00.776 [2024-08-14 06:41:28.027724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.036 [2024-08-14 06:41:28.030055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.036 [2024-08-14 06:41:28.030135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:01.036 pt2 00:10:01.036 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:10:01.036 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:10:01.036 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:10:01.036 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:10:01.036 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:01.036 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:01.036 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:10:01.036 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:01.036 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:10:01.036 malloc3 00:10:01.036 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:01.296 [2024-08-14 06:41:28.469501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:01.296 [2024-08-14 06:41:28.469571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.296 [2024-08-14 06:41:28.469597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:01.296 [2024-08-14 06:41:28.469606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.296 [2024-08-14 06:41:28.471752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.296 [2024-08-14 
06:41:28.471791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:01.296 pt3 00:10:01.296 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:10:01.296 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:10:01.296 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:10:01.555 [2024-08-14 06:41:28.709199] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:01.555 [2024-08-14 06:41:28.711212] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:01.555 [2024-08-14 06:41:28.711285] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:01.556 [2024-08-14 06:41:28.711465] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:01.556 [2024-08-14 06:41:28.711484] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:01.556 [2024-08-14 06:41:28.711826] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:01.556 [2024-08-14 06:41:28.711993] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:01.556 [2024-08-14 06:41:28.712002] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:01.556 [2024-08-14 06:41:28.712155] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.556 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:01.556 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:01.556 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:01.556 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:01.556 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:01.556 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:01.556 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:01.556 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:01.556 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:01.556 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:01.556 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.556 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:01.815 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:01.815 "name": "raid_bdev1", 00:10:01.815 "uuid": "5a38b756-93ca-4791-b86f-ddfa23b90bdb", 00:10:01.815 "strip_size_kb": 64, 00:10:01.815 "state": "online", 00:10:01.815 "raid_level": "raid0", 00:10:01.815 "superblock": true, 00:10:01.815 "num_base_bdevs": 3, 00:10:01.815 "num_base_bdevs_discovered": 3, 00:10:01.815 "num_base_bdevs_operational": 3, 00:10:01.815 
"base_bdevs_list": [ 00:10:01.815 { 00:10:01.815 "name": "pt1", 00:10:01.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.815 "is_configured": true, 00:10:01.815 "data_offset": 2048, 00:10:01.815 "data_size": 63488 00:10:01.815 }, 00:10:01.815 { 00:10:01.815 "name": "pt2", 00:10:01.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.815 "is_configured": true, 00:10:01.815 "data_offset": 2048, 00:10:01.815 "data_size": 63488 00:10:01.815 }, 00:10:01.815 { 00:10:01.815 "name": "pt3", 00:10:01.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:01.815 "is_configured": true, 00:10:01.815 "data_offset": 2048, 00:10:01.815 "data_size": 63488 00:10:01.815 } 00:10:01.815 ] 00:10:01.815 }' 00:10:01.815 06:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:01.815 06:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.383 06:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:10:02.383 06:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:02.383 06:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:02.383 06:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:02.383 06:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:02.383 06:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:02.383 06:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:02.383 06:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:02.642 [2024-08-14 06:41:29.703780] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.642 06:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:02.642 "name": "raid_bdev1", 00:10:02.642 "aliases": [ 00:10:02.642 "5a38b756-93ca-4791-b86f-ddfa23b90bdb" 00:10:02.642 ], 00:10:02.642 "product_name": "Raid Volume", 00:10:02.642 "block_size": 512, 00:10:02.642 "num_blocks": 190464, 00:10:02.642 "uuid": "5a38b756-93ca-4791-b86f-ddfa23b90bdb", 00:10:02.642 "assigned_rate_limits": { 00:10:02.642 "rw_ios_per_sec": 0, 00:10:02.642 "rw_mbytes_per_sec": 0, 00:10:02.642 "r_mbytes_per_sec": 0, 00:10:02.642 "w_mbytes_per_sec": 0 00:10:02.642 }, 00:10:02.642 "claimed": false, 00:10:02.642 "zoned": false, 00:10:02.642 "supported_io_types": { 00:10:02.642 "read": true, 00:10:02.642 "write": true, 00:10:02.642 "unmap": true, 00:10:02.642 "flush": true, 00:10:02.642 "reset": true, 00:10:02.642 "nvme_admin": false, 00:10:02.642 "nvme_io": false, 00:10:02.642 "nvme_io_md": false, 00:10:02.642 "write_zeroes": true, 00:10:02.642 "zcopy": false, 00:10:02.642 "get_zone_info": false, 00:10:02.642 "zone_management": false, 00:10:02.642 "zone_append": false, 00:10:02.642 "compare": false, 00:10:02.642 "compare_and_write": false, 00:10:02.642 "abort": false, 00:10:02.642 "seek_hole": false, 00:10:02.642 "seek_data": false, 00:10:02.642 "copy": false, 00:10:02.642 "nvme_iov_md": false 00:10:02.642 }, 00:10:02.642 "memory_domains": [ 00:10:02.642 { 00:10:02.642 "dma_device_id": "system", 00:10:02.642 "dma_device_type": 1 00:10:02.642 }, 00:10:02.642 { 00:10:02.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.642 "dma_device_type": 2 
00:10:02.642 }, 00:10:02.642 { 00:10:02.642 "dma_device_id": "system", 00:10:02.642 "dma_device_type": 1 00:10:02.642 }, 00:10:02.642 { 00:10:02.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.642 "dma_device_type": 2 00:10:02.642 }, 00:10:02.642 { 00:10:02.642 "dma_device_id": "system", 00:10:02.642 "dma_device_type": 1 00:10:02.642 }, 00:10:02.642 { 00:10:02.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.642 "dma_device_type": 2 00:10:02.642 } 00:10:02.642 ], 00:10:02.642 "driver_specific": { 00:10:02.642 "raid": { 00:10:02.642 "uuid": "5a38b756-93ca-4791-b86f-ddfa23b90bdb", 00:10:02.642 "strip_size_kb": 64, 00:10:02.642 "state": "online", 00:10:02.642 "raid_level": "raid0", 00:10:02.642 "superblock": true, 00:10:02.642 "num_base_bdevs": 3, 00:10:02.642 "num_base_bdevs_discovered": 3, 00:10:02.642 "num_base_bdevs_operational": 3, 00:10:02.642 "base_bdevs_list": [ 00:10:02.642 { 00:10:02.642 "name": "pt1", 00:10:02.642 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.642 "is_configured": true, 00:10:02.642 "data_offset": 2048, 00:10:02.642 "data_size": 63488 00:10:02.642 }, 00:10:02.642 { 00:10:02.642 "name": "pt2", 00:10:02.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.642 "is_configured": true, 00:10:02.642 "data_offset": 2048, 00:10:02.642 "data_size": 63488 00:10:02.642 }, 00:10:02.642 { 00:10:02.642 "name": "pt3", 00:10:02.642 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:02.642 "is_configured": true, 00:10:02.642 "data_offset": 2048, 00:10:02.642 "data_size": 63488 00:10:02.642 } 00:10:02.642 ] 00:10:02.642 } 00:10:02.642 } 00:10:02.642 }' 00:10:02.642 06:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:02.642 06:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:02.642 pt2 00:10:02.642 pt3' 00:10:02.642 06:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:02.642 06:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:02.642 06:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:02.901 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:02.901 "name": "pt1", 00:10:02.901 "aliases": [ 00:10:02.901 "00000000-0000-0000-0000-000000000001" 00:10:02.901 ], 00:10:02.901 "product_name": "passthru", 00:10:02.901 "block_size": 512, 00:10:02.901 "num_blocks": 65536, 00:10:02.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.901 "assigned_rate_limits": { 00:10:02.901 "rw_ios_per_sec": 0, 00:10:02.901 "rw_mbytes_per_sec": 0, 00:10:02.901 "r_mbytes_per_sec": 0, 00:10:02.901 "w_mbytes_per_sec": 0 00:10:02.901 }, 00:10:02.901 "claimed": true, 00:10:02.901 "claim_type": "exclusive_write", 00:10:02.901 "zoned": false, 00:10:02.901 "supported_io_types": { 00:10:02.901 "read": true, 00:10:02.901 "write": true, 00:10:02.901 "unmap": true, 00:10:02.901 "flush": true, 00:10:02.901 "reset": true, 00:10:02.901 "nvme_admin": false, 00:10:02.901 "nvme_io": false, 00:10:02.901 "nvme_io_md": false, 00:10:02.901 "write_zeroes": true, 00:10:02.901 "zcopy": true, 00:10:02.901 "get_zone_info": false, 00:10:02.901 "zone_management": false, 00:10:02.901 "zone_append": false, 00:10:02.901 "compare": false, 00:10:02.901 "compare_and_write": false, 00:10:02.901 "abort": true, 
00:10:02.901 "seek_hole": false, 00:10:02.901 "seek_data": false, 00:10:02.901 "copy": true, 00:10:02.901 "nvme_iov_md": false 00:10:02.901 }, 00:10:02.901 "memory_domains": [ 00:10:02.901 { 00:10:02.901 "dma_device_id": "system", 00:10:02.901 "dma_device_type": 1 00:10:02.901 }, 00:10:02.901 { 00:10:02.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.901 "dma_device_type": 2 00:10:02.901 } 00:10:02.901 ], 00:10:02.901 "driver_specific": { 00:10:02.901 "passthru": { 00:10:02.901 "name": "pt1", 00:10:02.901 "base_bdev_name": "malloc1" 00:10:02.901 } 00:10:02.901 } 00:10:02.901 }' 00:10:02.901 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:02.902 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:02.902 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:02.902 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:03.160 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:03.160 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:03.160 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:03.160 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:03.160 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:03.160 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:03.160 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:03.160 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:03.160 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:03.160 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:03.160 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:03.420 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:03.420 "name": "pt2", 00:10:03.420 "aliases": [ 00:10:03.420 "00000000-0000-0000-0000-000000000002" 00:10:03.420 ], 00:10:03.420 "product_name": "passthru", 00:10:03.420 "block_size": 512, 00:10:03.420 "num_blocks": 65536, 00:10:03.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.420 "assigned_rate_limits": { 00:10:03.420 "rw_ios_per_sec": 0, 00:10:03.420 "rw_mbytes_per_sec": 0, 00:10:03.420 "r_mbytes_per_sec": 0, 00:10:03.420 "w_mbytes_per_sec": 0 00:10:03.420 }, 00:10:03.420 "claimed": true, 00:10:03.420 "claim_type": "exclusive_write", 00:10:03.420 "zoned": false, 00:10:03.420 "supported_io_types": { 00:10:03.420 "read": true, 00:10:03.420 "write": true, 00:10:03.420 "unmap": true, 00:10:03.420 "flush": true, 00:10:03.420 "reset": true, 00:10:03.420 "nvme_admin": false, 00:10:03.420 "nvme_io": false, 00:10:03.420 "nvme_io_md": false, 00:10:03.420 "write_zeroes": true, 00:10:03.420 "zcopy": true, 00:10:03.420 "get_zone_info": false, 00:10:03.420 "zone_management": false, 00:10:03.420 "zone_append": false, 00:10:03.420 "compare": false, 00:10:03.420 "compare_and_write": false, 00:10:03.420 "abort": true, 00:10:03.420 "seek_hole": false, 00:10:03.420 "seek_data": false, 00:10:03.420 "copy": true, 00:10:03.420 "nvme_iov_md": false 00:10:03.420 }, 
00:10:03.420 "memory_domains": [ 00:10:03.420 { 00:10:03.420 "dma_device_id": "system", 00:10:03.420 "dma_device_type": 1 00:10:03.420 }, 00:10:03.420 { 00:10:03.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.420 "dma_device_type": 2 00:10:03.420 } 00:10:03.420 ], 00:10:03.420 "driver_specific": { 00:10:03.420 "passthru": { 00:10:03.420 "name": "pt2", 00:10:03.420 "base_bdev_name": "malloc2" 00:10:03.420 } 00:10:03.420 } 00:10:03.420 }' 00:10:03.420 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:03.420 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:03.679 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:03.679 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:03.679 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:03.679 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:03.679 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:03.679 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:03.679 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:03.679 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:03.679 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:03.939 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:03.939 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:03.939 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:10:03.939 06:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:03.939 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:03.939 "name": "pt3", 00:10:03.939 "aliases": [ 00:10:03.939 "00000000-0000-0000-0000-000000000003" 00:10:03.939 ], 00:10:03.939 "product_name": "passthru", 00:10:03.939 "block_size": 512, 00:10:03.939 "num_blocks": 65536, 00:10:03.939 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:03.939 "assigned_rate_limits": { 00:10:03.939 "rw_ios_per_sec": 0, 00:10:03.939 "rw_mbytes_per_sec": 0, 00:10:03.939 "r_mbytes_per_sec": 0, 00:10:03.939 "w_mbytes_per_sec": 0 00:10:03.939 }, 00:10:03.939 "claimed": true, 00:10:03.939 "claim_type": "exclusive_write", 00:10:03.939 "zoned": false, 00:10:03.939 "supported_io_types": { 00:10:03.939 "read": true, 00:10:03.939 "write": true, 00:10:03.939 "unmap": true, 00:10:03.939 "flush": true, 00:10:03.939 "reset": true, 00:10:03.939 "nvme_admin": false, 00:10:03.939 "nvme_io": false, 00:10:03.939 "nvme_io_md": false, 00:10:03.939 "write_zeroes": true, 00:10:03.939 "zcopy": true, 00:10:03.939 "get_zone_info": false, 00:10:03.939 "zone_management": false, 00:10:03.939 "zone_append": false, 00:10:03.939 "compare": false, 00:10:03.939 "compare_and_write": false, 00:10:03.939 "abort": true, 00:10:03.939 "seek_hole": false, 00:10:03.939 "seek_data": false, 00:10:03.939 "copy": true, 00:10:03.939 "nvme_iov_md": false 00:10:03.939 }, 00:10:03.940 "memory_domains": [ 00:10:03.940 { 00:10:03.940 "dma_device_id": "system", 00:10:03.940 "dma_device_type": 1 00:10:03.940 }, 00:10:03.940 { 
00:10:03.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.940 "dma_device_type": 2 00:10:03.940 } 00:10:03.940 ], 00:10:03.940 "driver_specific": { 00:10:03.940 "passthru": { 00:10:03.940 "name": "pt3", 00:10:03.940 "base_bdev_name": "malloc3" 00:10:03.940 } 00:10:03.940 } 00:10:03.940 }' 00:10:03.940 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:04.199 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:04.199 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:04.199 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:04.199 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:04.199 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:04.199 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:04.199 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:04.199 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:04.199 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:04.458 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:04.458 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:04.458 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:04.458 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:10:04.717 [2024-08-14 06:41:31.744361] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.717 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=5a38b756-93ca-4791-b86f-ddfa23b90bdb 00:10:04.717 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 5a38b756-93ca-4791-b86f-ddfa23b90bdb ']' 00:10:04.717 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:04.717 [2024-08-14 06:41:31.967686] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:04.717 [2024-08-14 06:41:31.967816] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.717 [2024-08-14 06:41:31.967926] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.717 [2024-08-14 06:41:31.968004] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.717 [2024-08-14 06:41:31.968015] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:04.976 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:04.976 06:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:10:04.976 06:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:10:04.976 06:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:10:04.976 06:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 
-- # for i in "${base_bdevs_pt[@]}" 00:10:04.976 06:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:05.236 06:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:10:05.236 06:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:05.496 06:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:10:05.496 06:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:05.756 06:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:05.756 06:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:06.015 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:10:06.015 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:06.015 06:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:10:06.015 06:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:06.015 06:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:06.015 06:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:10:06.015 06:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:06.015 06:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:10:06.015 06:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:06.015 06:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:10:06.015 06:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:06.015 06:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:06.015 06:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:06.275 [2024-08-14 06:41:33.325334] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:06.275 [2024-08-14 06:41:33.327200] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:06.275 [2024-08-14 06:41:33.327253] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:06.275 [2024-08-14 06:41:33.327312] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:06.275 [2024-08-14 06:41:33.327368] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:06.275 [2024-08-14 06:41:33.327386] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:06.275 [2024-08-14 06:41:33.327403] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:06.275 [2024-08-14 06:41:33.327412] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:10:06.275 request: 00:10:06.275 { 00:10:06.275 "name": "raid_bdev1", 00:10:06.275 "raid_level": "raid0", 00:10:06.275 "base_bdevs": [ 00:10:06.275 "malloc1", 00:10:06.275 "malloc2", 00:10:06.275 "malloc3" 00:10:06.275 ], 00:10:06.275 "strip_size_kb": 64, 00:10:06.275 "superblock": false, 00:10:06.275 "method": "bdev_raid_create", 00:10:06.275 "req_id": 1 00:10:06.275 } 00:10:06.275 Got JSON-RPC error response 00:10:06.275 response: 00:10:06.275 { 00:10:06.275 "code": -17, 00:10:06.275 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:06.275 } 00:10:06.275 06:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:10:06.275 06:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:10:06.275 06:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:10:06.275 06:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:10:06.275 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.275 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:10:06.535 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:10:06.535 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:10:06.535 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:06.535 [2024-08-14 06:41:33.736583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:06.535 [2024-08-14 06:41:33.736671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.535 [2024-08-14 06:41:33.736694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:06.535 [2024-08-14 06:41:33.736704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.535 [2024-08-14 06:41:33.738957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.535 [2024-08-14 06:41:33.738994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:06.535 [2024-08-14 06:41:33.739081] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:06.535 [2024-08-14 06:41:33.739123] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:06.535 pt1 00:10:06.535 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:06.535 
06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:06.535 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:06.535 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:06.535 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:06.535 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:06.535 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:06.535 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:06.535 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:06.535 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:06.535 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.535 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.794 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:06.794 "name": "raid_bdev1", 00:10:06.794 "uuid": "5a38b756-93ca-4791-b86f-ddfa23b90bdb", 00:10:06.794 "strip_size_kb": 64, 00:10:06.794 "state": "configuring", 00:10:06.794 "raid_level": "raid0", 00:10:06.794 "superblock": true, 00:10:06.794 "num_base_bdevs": 3, 00:10:06.794 "num_base_bdevs_discovered": 1, 00:10:06.794 "num_base_bdevs_operational": 3, 00:10:06.794 "base_bdevs_list": [ 00:10:06.794 { 00:10:06.794 "name": "pt1", 00:10:06.794 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.794 "is_configured": true, 00:10:06.794 "data_offset": 2048, 00:10:06.794 "data_size": 63488 00:10:06.794 }, 00:10:06.794 { 00:10:06.794 "name": null, 00:10:06.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.794 "is_configured": false, 00:10:06.794 "data_offset": 2048, 00:10:06.794 "data_size": 63488 00:10:06.794 }, 00:10:06.795 { 00:10:06.795 "name": null, 00:10:06.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.795 "is_configured": false, 00:10:06.795 "data_offset": 2048, 00:10:06.795 "data_size": 63488 00:10:06.795 } 00:10:06.795 ] 00:10:06.795 }' 00:10:06.795 06:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:06.795 06:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.364 06:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:10:07.364 06:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:07.624 [2024-08-14 06:41:34.718906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:07.624 [2024-08-14 06:41:34.719077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.624 [2024-08-14 06:41:34.719122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:07.624 [2024-08-14 06:41:34.719150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.625 [2024-08-14 06:41:34.719592] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.625 [2024-08-14 06:41:34.719650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:07.625 [2024-08-14 06:41:34.719755] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:07.625 [2024-08-14 06:41:34.719804] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:07.625 pt2 00:10:07.625 06:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:07.884 [2024-08-14 06:41:34.938549] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:07.884 06:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:07.884 06:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:07.884 06:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:07.884 06:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:07.884 06:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:07.884 06:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:07.884 06:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:07.884 06:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:07.884 06:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:07.884 06:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:07.884 06:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.884 06:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.144 06:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:08.144 "name": "raid_bdev1", 00:10:08.144 "uuid": "5a38b756-93ca-4791-b86f-ddfa23b90bdb", 00:10:08.144 "strip_size_kb": 64, 00:10:08.144 "state": "configuring", 00:10:08.144 "raid_level": "raid0", 00:10:08.144 "superblock": true, 00:10:08.144 "num_base_bdevs": 3, 00:10:08.144 "num_base_bdevs_discovered": 1, 00:10:08.144 "num_base_bdevs_operational": 3, 00:10:08.144 "base_bdevs_list": [ 00:10:08.144 { 00:10:08.144 "name": "pt1", 00:10:08.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.144 "is_configured": true, 00:10:08.144 "data_offset": 2048, 00:10:08.144 "data_size": 63488 00:10:08.144 }, 00:10:08.144 { 00:10:08.144 "name": null, 00:10:08.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.144 "is_configured": false, 00:10:08.144 "data_offset": 2048, 00:10:08.144 "data_size": 63488 00:10:08.144 }, 00:10:08.144 { 00:10:08.144 "name": null, 00:10:08.144 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:08.144 "is_configured": false, 00:10:08.144 "data_offset": 2048, 00:10:08.144 "data_size": 63488 00:10:08.144 } 00:10:08.144 ] 00:10:08.144 }' 00:10:08.144 06:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:08.144 06:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.711 06:41:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:10:08.711 06:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:10:08.711 06:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:08.712 [2024-08-14 06:41:35.848969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:08.712 [2024-08-14 06:41:35.849052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.712 [2024-08-14 06:41:35.849071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:08.712 [2024-08-14 06:41:35.849082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.712 [2024-08-14 06:41:35.849556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.712 [2024-08-14 06:41:35.849578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:08.712 [2024-08-14 06:41:35.849656] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:08.712 [2024-08-14 06:41:35.849680] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:08.712 pt2 00:10:08.712 06:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:10:08.712 06:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:10:08.712 06:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:08.972 [2024-08-14 06:41:36.080598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:08.972 [2024-08-14 06:41:36.080757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.972 [2024-08-14 06:41:36.080791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:08.972 [2024-08-14 06:41:36.080841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.972 [2024-08-14 06:41:36.081281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.972 [2024-08-14 06:41:36.081342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:08.972 [2024-08-14 06:41:36.081448] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:08.972 [2024-08-14 06:41:36.081501] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:08.972 [2024-08-14 06:41:36.081634] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:08.972 [2024-08-14 06:41:36.081676] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:08.972 [2024-08-14 06:41:36.081974] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:08.972 [2024-08-14 06:41:36.082161] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:08.972 [2024-08-14 06:41:36.082226] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:08.972 [2024-08-14 06:41:36.082381] bdev_raid.c: 343:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:10:08.972 pt3 00:10:08.972 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:10:08.972 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:10:08.972 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:08.972 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:08.972 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:08.972 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:08.972 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:08.972 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:08.972 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:08.972 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:08.972 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:08.972 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:08.972 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:08.972 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.232 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:09.232 "name": "raid_bdev1", 00:10:09.232 "uuid": "5a38b756-93ca-4791-b86f-ddfa23b90bdb", 00:10:09.232 "strip_size_kb": 64, 00:10:09.232 "state": "online", 00:10:09.232 "raid_level": "raid0", 00:10:09.232 "superblock": true, 00:10:09.232 "num_base_bdevs": 3, 00:10:09.232 "num_base_bdevs_discovered": 3, 00:10:09.232 "num_base_bdevs_operational": 3, 00:10:09.232 "base_bdevs_list": [ 00:10:09.232 { 00:10:09.232 "name": "pt1", 00:10:09.232 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:09.232 "is_configured": true, 00:10:09.232 "data_offset": 2048, 00:10:09.232 "data_size": 63488 00:10:09.232 }, 00:10:09.232 { 00:10:09.232 "name": "pt2", 00:10:09.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:09.232 "is_configured": true, 00:10:09.232 "data_offset": 2048, 00:10:09.232 "data_size": 63488 00:10:09.232 }, 00:10:09.232 { 00:10:09.232 "name": "pt3", 00:10:09.232 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:09.232 "is_configured": true, 00:10:09.232 "data_offset": 2048, 00:10:09.232 "data_size": 63488 00:10:09.232 } 00:10:09.232 ] 00:10:09.232 }' 00:10:09.232 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:09.232 06:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.814 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:10:09.814 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:09.814 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:09.814 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:09.814 06:41:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:09.814 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:09.814 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:09.814 06:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:09.814 [2024-08-14 06:41:36.987328] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:09.814 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:09.814 "name": "raid_bdev1", 00:10:09.814 "aliases": [ 00:10:09.814 "5a38b756-93ca-4791-b86f-ddfa23b90bdb" 00:10:09.814 ], 00:10:09.814 "product_name": "Raid Volume", 00:10:09.814 "block_size": 512, 00:10:09.814 "num_blocks": 190464, 00:10:09.814 "uuid": "5a38b756-93ca-4791-b86f-ddfa23b90bdb", 00:10:09.814 "assigned_rate_limits": { 00:10:09.814 "rw_ios_per_sec": 0, 00:10:09.814 "rw_mbytes_per_sec": 0, 00:10:09.814 "r_mbytes_per_sec": 0, 00:10:09.814 "w_mbytes_per_sec": 0 00:10:09.814 }, 00:10:09.814 "claimed": false, 00:10:09.814 "zoned": false, 00:10:09.814 "supported_io_types": { 00:10:09.814 "read": true, 00:10:09.814 "write": true, 00:10:09.814 "unmap": true, 00:10:09.814 "flush": true, 00:10:09.814 "reset": true, 00:10:09.814 "nvme_admin": false, 00:10:09.814 "nvme_io": false, 00:10:09.814 "nvme_io_md": false, 00:10:09.814 "write_zeroes": true, 00:10:09.814 "zcopy": false, 00:10:09.814 "get_zone_info": false, 00:10:09.814 "zone_management": false, 00:10:09.814 "zone_append": false, 00:10:09.814 "compare": false, 00:10:09.814 "compare_and_write": false, 00:10:09.814 "abort": false, 00:10:09.814 "seek_hole": false, 00:10:09.814 "seek_data": false, 00:10:09.814 "copy": false, 00:10:09.814 "nvme_iov_md": false 00:10:09.814 }, 00:10:09.814 "memory_domains": [ 00:10:09.814 { 00:10:09.814 "dma_device_id": "system", 00:10:09.814 "dma_device_type": 1 00:10:09.814 }, 00:10:09.814 { 00:10:09.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.814 "dma_device_type": 2 00:10:09.814 }, 00:10:09.814 { 00:10:09.814 "dma_device_id": "system", 00:10:09.814 "dma_device_type": 1 00:10:09.814 }, 00:10:09.814 { 00:10:09.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.814 "dma_device_type": 2 00:10:09.814 }, 00:10:09.814 { 00:10:09.814 "dma_device_id": "system", 00:10:09.814 "dma_device_type": 1 00:10:09.814 }, 00:10:09.814 { 00:10:09.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.814 "dma_device_type": 2 00:10:09.814 } 00:10:09.814 ], 00:10:09.814 "driver_specific": { 00:10:09.814 "raid": { 00:10:09.814 "uuid": "5a38b756-93ca-4791-b86f-ddfa23b90bdb", 00:10:09.814 "strip_size_kb": 64, 00:10:09.814 "state": "online", 00:10:09.814 "raid_level": "raid0", 00:10:09.814 "superblock": true, 00:10:09.814 "num_base_bdevs": 3, 00:10:09.814 "num_base_bdevs_discovered": 3, 00:10:09.814 "num_base_bdevs_operational": 3, 00:10:09.814 "base_bdevs_list": [ 00:10:09.814 { 00:10:09.814 "name": "pt1", 00:10:09.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:09.814 "is_configured": true, 00:10:09.814 "data_offset": 2048, 00:10:09.814 "data_size": 63488 00:10:09.814 }, 00:10:09.814 { 00:10:09.814 "name": "pt2", 00:10:09.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:09.814 "is_configured": true, 00:10:09.814 "data_offset": 2048, 00:10:09.814 "data_size": 63488 00:10:09.814 }, 00:10:09.814 { 00:10:09.814 "name": "pt3", 00:10:09.814 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:09.814 "is_configured": true, 00:10:09.814 "data_offset": 2048, 00:10:09.814 "data_size": 63488 00:10:09.814 } 00:10:09.814 ] 00:10:09.814 } 00:10:09.814 } 00:10:09.814 }' 00:10:09.814 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:09.814 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:09.814 pt2 00:10:09.814 pt3' 00:10:09.814 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:09.814 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:09.814 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:10.075 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:10.075 "name": "pt1", 00:10:10.075 "aliases": [ 00:10:10.075 "00000000-0000-0000-0000-000000000001" 00:10:10.075 ], 00:10:10.075 "product_name": "passthru", 00:10:10.075 "block_size": 512, 00:10:10.075 "num_blocks": 65536, 00:10:10.075 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.075 "assigned_rate_limits": { 00:10:10.075 "rw_ios_per_sec": 0, 00:10:10.075 "rw_mbytes_per_sec": 0, 00:10:10.075 "r_mbytes_per_sec": 0, 00:10:10.075 "w_mbytes_per_sec": 0 00:10:10.075 }, 00:10:10.075 "claimed": true, 00:10:10.075 "claim_type": "exclusive_write", 00:10:10.075 "zoned": false, 00:10:10.075 "supported_io_types": { 00:10:10.075 "read": true, 00:10:10.075 "write": true, 00:10:10.075 "unmap": true, 00:10:10.075 "flush": true, 00:10:10.075 "reset": true, 00:10:10.075 "nvme_admin": false, 00:10:10.075 "nvme_io": false, 00:10:10.075 "nvme_io_md": false, 00:10:10.075 "write_zeroes": true, 00:10:10.075 "zcopy": true, 00:10:10.075 "get_zone_info": false, 00:10:10.075 "zone_management": false, 00:10:10.075 "zone_append": false, 00:10:10.075 "compare": false, 00:10:10.075 "compare_and_write": false, 00:10:10.075 "abort": true, 00:10:10.075 "seek_hole": false, 00:10:10.075 "seek_data": false, 00:10:10.075 "copy": true, 00:10:10.075 "nvme_iov_md": false 00:10:10.075 }, 00:10:10.075 "memory_domains": [ 00:10:10.075 { 00:10:10.075 "dma_device_id": "system", 00:10:10.075 "dma_device_type": 1 00:10:10.075 }, 00:10:10.075 { 00:10:10.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.075 "dma_device_type": 2 00:10:10.075 } 00:10:10.075 ], 00:10:10.075 "driver_specific": { 00:10:10.075 "passthru": { 00:10:10.075 "name": "pt1", 00:10:10.075 "base_bdev_name": "malloc1" 00:10:10.075 } 00:10:10.075 } 00:10:10.075 }' 00:10:10.075 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:10.075 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:10.075 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:10.075 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:10.335 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:10.335 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:10.335 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:10.335 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:10.335 06:41:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:10.335 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:10.335 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:10.335 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:10.335 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:10.335 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:10.335 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:10.594 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:10.594 "name": "pt2", 00:10:10.594 "aliases": [ 00:10:10.594 "00000000-0000-0000-0000-000000000002" 00:10:10.594 ], 00:10:10.594 "product_name": "passthru", 00:10:10.594 "block_size": 512, 00:10:10.594 "num_blocks": 65536, 00:10:10.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.594 "assigned_rate_limits": { 00:10:10.594 "rw_ios_per_sec": 0, 00:10:10.594 "rw_mbytes_per_sec": 0, 00:10:10.594 "r_mbytes_per_sec": 0, 00:10:10.594 "w_mbytes_per_sec": 0 00:10:10.594 }, 00:10:10.594 "claimed": true, 00:10:10.594 "claim_type": "exclusive_write", 00:10:10.594 "zoned": false, 00:10:10.594 "supported_io_types": { 00:10:10.594 "read": true, 00:10:10.594 "write": true, 00:10:10.594 "unmap": true, 00:10:10.594 "flush": true, 00:10:10.594 "reset": true, 00:10:10.594 "nvme_admin": false, 00:10:10.594 "nvme_io": false, 00:10:10.594 "nvme_io_md": false, 00:10:10.594 "write_zeroes": true, 00:10:10.594 "zcopy": true, 00:10:10.594 "get_zone_info": false, 00:10:10.594 "zone_management": false, 00:10:10.594 "zone_append": false, 00:10:10.594 "compare": false, 00:10:10.594 "compare_and_write": false, 00:10:10.594 "abort": true, 00:10:10.594 "seek_hole": false, 00:10:10.594 "seek_data": false, 00:10:10.594 "copy": true, 00:10:10.594 "nvme_iov_md": false 00:10:10.594 }, 00:10:10.594 "memory_domains": [ 00:10:10.594 { 00:10:10.594 "dma_device_id": "system", 00:10:10.594 "dma_device_type": 1 00:10:10.594 }, 00:10:10.594 { 00:10:10.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.594 "dma_device_type": 2 00:10:10.594 } 00:10:10.594 ], 00:10:10.594 "driver_specific": { 00:10:10.594 "passthru": { 00:10:10.594 "name": "pt2", 00:10:10.594 "base_bdev_name": "malloc2" 00:10:10.594 } 00:10:10.594 } 00:10:10.594 }' 00:10:10.594 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:10.594 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:10.854 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:10.854 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:10.854 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:10.854 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:10.854 06:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:10.854 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:10.854 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:10.854 06:41:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:10.854 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:11.114 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:11.114 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:11.114 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:10:11.114 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:11.114 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:11.114 "name": "pt3", 00:10:11.114 "aliases": [ 00:10:11.114 "00000000-0000-0000-0000-000000000003" 00:10:11.114 ], 00:10:11.114 "product_name": "passthru", 00:10:11.114 "block_size": 512, 00:10:11.114 "num_blocks": 65536, 00:10:11.114 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.114 "assigned_rate_limits": { 00:10:11.114 "rw_ios_per_sec": 0, 00:10:11.114 "rw_mbytes_per_sec": 0, 00:10:11.114 "r_mbytes_per_sec": 0, 00:10:11.114 "w_mbytes_per_sec": 0 00:10:11.114 }, 00:10:11.114 "claimed": true, 00:10:11.114 "claim_type": "exclusive_write", 00:10:11.114 "zoned": false, 00:10:11.114 "supported_io_types": { 00:10:11.114 "read": true, 00:10:11.114 "write": true, 00:10:11.114 "unmap": true, 00:10:11.114 "flush": true, 00:10:11.114 "reset": true, 00:10:11.114 "nvme_admin": false, 00:10:11.114 "nvme_io": false, 00:10:11.114 "nvme_io_md": false, 00:10:11.114 "write_zeroes": true, 00:10:11.114 "zcopy": true, 00:10:11.114 "get_zone_info": false, 00:10:11.114 "zone_management": false, 00:10:11.114 "zone_append": false, 00:10:11.114 "compare": false, 00:10:11.114 "compare_and_write": false, 00:10:11.114 "abort": true, 00:10:11.114 "seek_hole": false, 00:10:11.114 "seek_data": false, 00:10:11.114 "copy": true, 00:10:11.114 "nvme_iov_md": false 00:10:11.114 }, 00:10:11.114 "memory_domains": [ 00:10:11.114 { 00:10:11.114 "dma_device_id": "system", 00:10:11.114 "dma_device_type": 1 00:10:11.114 }, 00:10:11.114 { 00:10:11.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.114 "dma_device_type": 2 00:10:11.114 } 00:10:11.114 ], 00:10:11.114 "driver_specific": { 00:10:11.114 "passthru": { 00:10:11.114 "name": "pt3", 00:10:11.114 "base_bdev_name": "malloc3" 00:10:11.114 } 00:10:11.114 } 00:10:11.114 }' 00:10:11.114 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:11.114 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:11.373 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:11.373 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:11.373 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:11.373 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:11.373 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:11.373 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:11.373 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:11.373 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:11.631 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:11.631 
06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:11.631 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:11.631 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:10:11.631 [2024-08-14 06:41:38.868120] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.889 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 5a38b756-93ca-4791-b86f-ddfa23b90bdb '!=' 5a38b756-93ca-4791-b86f-ddfa23b90bdb ']' 00:10:11.889 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:10:11.889 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:11.889 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:11.889 06:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 77244 00:10:11.889 06:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 77244 ']' 00:10:11.889 06:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 77244 00:10:11.889 06:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:10:11.889 06:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:11.889 06:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77244 00:10:11.889 06:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:11.889 06:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:11.889 06:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77244' 00:10:11.889 killing process with pid 77244 00:10:11.889 06:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 77244 00:10:11.889 [2024-08-14 06:41:38.916138] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.889 [2024-08-14 06:41:38.916322] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.889 06:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 77244 00:10:11.889 [2024-08-14 06:41:38.916421] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.889 [2024-08-14 06:41:38.916440] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:10:11.889 [2024-08-14 06:41:38.950069] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.148 06:41:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:10:12.148 00:10:12.148 real 0m12.911s 00:10:12.148 user 0m23.515s 00:10:12.148 sys 0m1.895s 00:10:12.148 06:41:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:12.148 06:41:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.148 ************************************ 00:10:12.148 END TEST raid_superblock_test 00:10:12.148 ************************************ 00:10:12.148 06:41:39 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:12.148 06:41:39 bdev_raid -- 
common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:10:12.148 06:41:39 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:12.148 06:41:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.148 ************************************ 00:10:12.148 START TEST raid_read_error_test 00:10:12.148 ************************************ 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 3 read 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.iop6XvKmDm 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=77690 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:12.148 06:41:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 77690 /var/tmp/spdk-raid.sock 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 77690 ']' 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:12.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:12.148 06:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.148 [2024-08-14 06:41:39.350489] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:10:12.148 [2024-08-14 06:41:39.351077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77690 ] 00:10:12.406 [2024-08-14 06:41:39.499138] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.406 [2024-08-14 06:41:39.559294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.406 [2024-08-14 06:41:39.601773] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.406 [2024-08-14 06:41:39.601900] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.972 06:41:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:12.972 06:41:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:10:12.972 06:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:10:12.972 06:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:13.231 BaseBdev1_malloc 00:10:13.231 06:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:13.489 true 00:10:13.489 06:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:13.748 [2024-08-14 06:41:40.865598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:13.748 [2024-08-14 06:41:40.865757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.748 [2024-08-14 06:41:40.865804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:13.748 [2024-08-14 06:41:40.865848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.748 [2024-08-14 06:41:40.868119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.748 [2024-08-14 06:41:40.868223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:13.748 
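Each base device in the error tests is a three-layer stack: a malloc bdev, an error bdev wrapped around it (so failures can be injected later with bdev_error_inject_error), and a passthru bdev on top that the raid volume actually claims. The trace above just built that stack for BaseBdev1; a sketch of the same three RPC calls, using the arguments shown in this run, is:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # 32 MiB malloc bdev with 512-byte blocks (65536 blocks, matching the JSON dumps above).
  $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
  # Error-injection wrapper; it shows up as EE_BaseBdev1_malloc.
  $rpc bdev_error_create BaseBdev1_malloc
  # Passthru bdev on top of the error device; this is what the raid volume later claims.
  $rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
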
BaseBdev1 00:10:13.748 06:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:10:13.748 06:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:14.007 BaseBdev2_malloc 00:10:14.007 06:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:14.266 true 00:10:14.266 06:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:14.266 [2024-08-14 06:41:41.497215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:14.266 [2024-08-14 06:41:41.497395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.266 [2024-08-14 06:41:41.497445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:14.266 [2024-08-14 06:41:41.497487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.266 [2024-08-14 06:41:41.499779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.266 [2024-08-14 06:41:41.499860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:14.266 BaseBdev2 00:10:14.266 06:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:10:14.266 06:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:14.523 BaseBdev3_malloc 00:10:14.523 06:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:10:14.782 true 00:10:14.782 06:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:15.041 [2024-08-14 06:41:42.153820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:15.041 [2024-08-14 06:41:42.153976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.041 [2024-08-14 06:41:42.154018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:15.041 [2024-08-14 06:41:42.154070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.041 [2024-08-14 06:41:42.156281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.041 [2024-08-14 06:41:42.156361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:15.041 BaseBdev3 00:10:15.041 06:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:10:15.299 [2024-08-14 06:41:42.373545] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.299 [2024-08-14 06:41:42.375643] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
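The bdev_raid_create call above is claiming the passthru bdevs one by one; once all three are claimed the volume goes online and verify_raid_bdev_state reads it back through the raid-specific RPC. A condensed sketch of that creation call and the follow-up query, with the names, strip size and superblock flag taken directly from the command in the trace, is:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # raid0 over the three passthru bdevs, 64 KiB strip (-z 64), with an on-disk superblock (-s).
  $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s

  # State, raid level, strip size and base-bdev list that verify_raid_bdev_state parses.
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
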
00:10:15.299 [2024-08-14 06:41:42.375723] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.299 [2024-08-14 06:41:42.375928] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:15.299 [2024-08-14 06:41:42.375940] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:15.299 [2024-08-14 06:41:42.376293] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:15.299 [2024-08-14 06:41:42.376459] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:15.299 [2024-08-14 06:41:42.376482] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:10:15.299 [2024-08-14 06:41:42.376649] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.299 06:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:15.299 06:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:15.299 06:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:15.299 06:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:15.299 06:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:15.299 06:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:15.299 06:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:15.299 06:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:15.299 06:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:15.299 06:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:15.299 06:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:15.299 06:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.558 06:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:15.558 "name": "raid_bdev1", 00:10:15.558 "uuid": "15a5a467-ccc6-46b5-a1c5-a8278d250e03", 00:10:15.558 "strip_size_kb": 64, 00:10:15.558 "state": "online", 00:10:15.558 "raid_level": "raid0", 00:10:15.558 "superblock": true, 00:10:15.558 "num_base_bdevs": 3, 00:10:15.558 "num_base_bdevs_discovered": 3, 00:10:15.558 "num_base_bdevs_operational": 3, 00:10:15.558 "base_bdevs_list": [ 00:10:15.558 { 00:10:15.558 "name": "BaseBdev1", 00:10:15.558 "uuid": "3ace0d9f-6aa9-5a29-ad7c-ed9cc9e71a3d", 00:10:15.558 "is_configured": true, 00:10:15.558 "data_offset": 2048, 00:10:15.558 "data_size": 63488 00:10:15.558 }, 00:10:15.558 { 00:10:15.558 "name": "BaseBdev2", 00:10:15.558 "uuid": "d928a9db-1bd9-5466-a924-f54fcad826c4", 00:10:15.558 "is_configured": true, 00:10:15.558 "data_offset": 2048, 00:10:15.558 "data_size": 63488 00:10:15.558 }, 00:10:15.558 { 00:10:15.558 "name": "BaseBdev3", 00:10:15.558 "uuid": "6ea0e2f4-3514-55cd-9338-eca292fac95f", 00:10:15.558 "is_configured": true, 00:10:15.558 "data_offset": 2048, 00:10:15.558 "data_size": 63488 00:10:15.558 } 00:10:15.558 ] 00:10:15.558 }' 00:10:15.558 06:41:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:15.558 06:41:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.131 06:41:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:16.131 06:41:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:10:16.131 [2024-08-14 06:41:43.228414] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:17.137 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:17.137 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:10:17.137 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:17.137 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:10:17.137 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:17.137 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:17.137 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:17.137 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:17.137 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:17.137 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:17.137 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:17.137 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:17.137 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:17.137 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:17.137 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:17.137 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.395 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:17.395 "name": "raid_bdev1", 00:10:17.395 "uuid": "15a5a467-ccc6-46b5-a1c5-a8278d250e03", 00:10:17.395 "strip_size_kb": 64, 00:10:17.395 "state": "online", 00:10:17.395 "raid_level": "raid0", 00:10:17.395 "superblock": true, 00:10:17.395 "num_base_bdevs": 3, 00:10:17.395 "num_base_bdevs_discovered": 3, 00:10:17.395 "num_base_bdevs_operational": 3, 00:10:17.395 "base_bdevs_list": [ 00:10:17.395 { 00:10:17.395 "name": "BaseBdev1", 00:10:17.395 "uuid": "3ace0d9f-6aa9-5a29-ad7c-ed9cc9e71a3d", 00:10:17.395 "is_configured": true, 00:10:17.395 "data_offset": 2048, 00:10:17.395 "data_size": 63488 00:10:17.395 }, 00:10:17.395 { 00:10:17.395 "name": "BaseBdev2", 00:10:17.395 "uuid": "d928a9db-1bd9-5466-a924-f54fcad826c4", 00:10:17.395 "is_configured": true, 00:10:17.395 "data_offset": 2048, 00:10:17.395 "data_size": 63488 00:10:17.395 }, 00:10:17.395 { 00:10:17.395 "name": "BaseBdev3", 00:10:17.395 "uuid": 
"6ea0e2f4-3514-55cd-9338-eca292fac95f", 00:10:17.395 "is_configured": true, 00:10:17.395 "data_offset": 2048, 00:10:17.395 "data_size": 63488 00:10:17.395 } 00:10:17.395 ] 00:10:17.395 }' 00:10:17.395 06:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:17.395 06:41:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.962 06:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:18.221 [2024-08-14 06:41:45.263669] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.221 [2024-08-14 06:41:45.263791] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.221 [2024-08-14 06:41:45.266232] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.221 [2024-08-14 06:41:45.266316] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.221 [2024-08-14 06:41:45.266372] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.221 [2024-08-14 06:41:45.266431] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:10:18.221 0 00:10:18.221 06:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 77690 00:10:18.221 06:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 77690 ']' 00:10:18.221 06:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 77690 00:10:18.221 06:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:10:18.221 06:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:18.221 06:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77690 00:10:18.221 killing process with pid 77690 00:10:18.221 06:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:18.221 06:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:18.221 06:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77690' 00:10:18.221 06:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 77690 00:10:18.221 [2024-08-14 06:41:45.313724] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.221 06:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 77690 00:10:18.221 [2024-08-14 06:41:45.340130] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:18.479 06:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:10:18.479 06:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.iop6XvKmDm 00:10:18.479 06:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:10:18.479 ************************************ 00:10:18.479 END TEST raid_read_error_test 00:10:18.479 ************************************ 00:10:18.479 06:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.49 00:10:18.479 06:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:10:18.479 06:41:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:10:18.479 06:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:18.479 06:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.49 != \0\.\0\0 ]] 00:10:18.479 00:10:18.479 real 0m6.326s 00:10:18.479 user 0m10.023s 00:10:18.479 sys 0m0.861s 00:10:18.479 06:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:18.479 06:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.479 06:41:45 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:18.479 06:41:45 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:10:18.479 06:41:45 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:18.479 06:41:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:18.479 ************************************ 00:10:18.479 START TEST raid_write_error_test 00:10:18.479 ************************************ 00:10:18.479 06:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 3 write 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # strip_size=64 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.QO5dPFnNfx 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=77859 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 77859 /var/tmp/spdk-raid.sock 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 77859 ']' 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:18.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:18.480 06:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.738 [2024-08-14 06:41:45.746681] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
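The read and write error tests follow the same recipe: bdevperf is started against the raid target with a per-run log file, an error is injected into one EE_ base bdev, the queued workload is kicked off over RPC, and the failure rate is then pulled out of the bdevperf log. A sketch of that sequence for the write test, using the paths and the temp log file created above (helper variable names are illustrative), is:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  bdevperf_log=/raidtest/tmp.QO5dPFnNfx    # bdevperf log created by mktemp above

  # Inject write failures on the first base bdev's error device.
  $rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure

  # Start the queued workload; bdevperf was launched with -z, so it waits for this call.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests

  # Failure rate for the raid bdev: sixth column of its result row in the bdevperf log.
  fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
  [[ $fail_per_s != 0.00 ]]    # raid0 has no redundancy, so injected errors are expected to surface
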
00:10:18.738 [2024-08-14 06:41:45.746829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77859 ] 00:10:18.738 [2024-08-14 06:41:45.884477] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.738 [2024-08-14 06:41:45.934074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.738 [2024-08-14 06:41:45.976852] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.738 [2024-08-14 06:41:45.976977] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.672 06:41:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:19.672 06:41:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:10:19.672 06:41:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:10:19.672 06:41:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:19.672 BaseBdev1_malloc 00:10:19.672 06:41:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:19.929 true 00:10:19.929 06:41:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:20.187 [2024-08-14 06:41:47.248961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:20.187 [2024-08-14 06:41:47.249134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.187 [2024-08-14 06:41:47.249199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:20.187 [2024-08-14 06:41:47.249274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.187 [2024-08-14 06:41:47.251748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.187 [2024-08-14 06:41:47.251843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:20.187 BaseBdev1 00:10:20.187 06:41:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:10:20.187 06:41:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:20.446 BaseBdev2_malloc 00:10:20.446 06:41:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:20.446 true 00:10:20.706 06:41:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:20.706 [2024-08-14 06:41:47.924829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:20.706 [2024-08-14 06:41:47.924971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.706 [2024-08-14 06:41:47.925029] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:20.706 [2024-08-14 06:41:47.925067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.706 [2024-08-14 06:41:47.927339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.706 [2024-08-14 06:41:47.927427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:20.706 BaseBdev2 00:10:20.706 06:41:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:10:20.706 06:41:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:20.965 BaseBdev3_malloc 00:10:20.965 06:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:10:21.224 true 00:10:21.224 06:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:21.482 [2024-08-14 06:41:48.616595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:21.482 [2024-08-14 06:41:48.616677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.482 [2024-08-14 06:41:48.616701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:21.482 [2024-08-14 06:41:48.616712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.482 [2024-08-14 06:41:48.618881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.482 [2024-08-14 06:41:48.618923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:21.482 BaseBdev3 00:10:21.482 06:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:10:21.741 [2024-08-14 06:41:48.824310] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.741 [2024-08-14 06:41:48.826306] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.741 [2024-08-14 06:41:48.826425] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:21.741 [2024-08-14 06:41:48.826652] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:21.741 [2024-08-14 06:41:48.826704] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:21.741 [2024-08-14 06:41:48.827033] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:21.741 [2024-08-14 06:41:48.827231] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:21.741 [2024-08-14 06:41:48.827284] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:10:21.741 [2024-08-14 06:41:48.827479] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.741 06:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:21.741 
06:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:21.741 06:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:21.741 06:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:21.741 06:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:21.741 06:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:21.741 06:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:21.741 06:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:21.741 06:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:21.741 06:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:21.741 06:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:21.741 06:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.000 06:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:22.000 "name": "raid_bdev1", 00:10:22.000 "uuid": "7c3e3ff3-9e90-4c35-9e87-9e7ad3be1d26", 00:10:22.000 "strip_size_kb": 64, 00:10:22.000 "state": "online", 00:10:22.000 "raid_level": "raid0", 00:10:22.000 "superblock": true, 00:10:22.000 "num_base_bdevs": 3, 00:10:22.000 "num_base_bdevs_discovered": 3, 00:10:22.000 "num_base_bdevs_operational": 3, 00:10:22.000 "base_bdevs_list": [ 00:10:22.000 { 00:10:22.000 "name": "BaseBdev1", 00:10:22.000 "uuid": "028b9b7f-5be2-5014-a1dc-14d18b6a33ec", 00:10:22.000 "is_configured": true, 00:10:22.001 "data_offset": 2048, 00:10:22.001 "data_size": 63488 00:10:22.001 }, 00:10:22.001 { 00:10:22.001 "name": "BaseBdev2", 00:10:22.001 "uuid": "3c34d4bb-6af3-558b-9d15-bca45485be5b", 00:10:22.001 "is_configured": true, 00:10:22.001 "data_offset": 2048, 00:10:22.001 "data_size": 63488 00:10:22.001 }, 00:10:22.001 { 00:10:22.001 "name": "BaseBdev3", 00:10:22.001 "uuid": "1852c5c0-144f-5c8e-a5f3-d1b7fcf8904f", 00:10:22.001 "is_configured": true, 00:10:22.001 "data_offset": 2048, 00:10:22.001 "data_size": 63488 00:10:22.001 } 00:10:22.001 ] 00:10:22.001 }' 00:10:22.001 06:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:22.001 06:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.568 06:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:10:22.568 06:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:22.568 [2024-08-14 06:41:49.663145] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:23.503 06:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:23.760 06:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:10:23.760 06:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:23.760 06:41:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:10:23.760 06:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:23.760 06:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:23.760 06:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:23.760 06:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:23.760 06:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:23.760 06:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:23.760 06:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:23.760 06:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:23.760 06:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:23.760 06:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:23.760 06:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.760 06:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.018 06:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:24.018 "name": "raid_bdev1", 00:10:24.018 "uuid": "7c3e3ff3-9e90-4c35-9e87-9e7ad3be1d26", 00:10:24.018 "strip_size_kb": 64, 00:10:24.018 "state": "online", 00:10:24.018 "raid_level": "raid0", 00:10:24.018 "superblock": true, 00:10:24.018 "num_base_bdevs": 3, 00:10:24.018 "num_base_bdevs_discovered": 3, 00:10:24.018 "num_base_bdevs_operational": 3, 00:10:24.018 "base_bdevs_list": [ 00:10:24.018 { 00:10:24.019 "name": "BaseBdev1", 00:10:24.019 "uuid": "028b9b7f-5be2-5014-a1dc-14d18b6a33ec", 00:10:24.019 "is_configured": true, 00:10:24.019 "data_offset": 2048, 00:10:24.019 "data_size": 63488 00:10:24.019 }, 00:10:24.019 { 00:10:24.019 "name": "BaseBdev2", 00:10:24.019 "uuid": "3c34d4bb-6af3-558b-9d15-bca45485be5b", 00:10:24.019 "is_configured": true, 00:10:24.019 "data_offset": 2048, 00:10:24.019 "data_size": 63488 00:10:24.019 }, 00:10:24.019 { 00:10:24.019 "name": "BaseBdev3", 00:10:24.019 "uuid": "1852c5c0-144f-5c8e-a5f3-d1b7fcf8904f", 00:10:24.019 "is_configured": true, 00:10:24.019 "data_offset": 2048, 00:10:24.019 "data_size": 63488 00:10:24.019 } 00:10:24.019 ] 00:10:24.019 }' 00:10:24.019 06:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:24.019 06:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.585 06:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:24.844 [2024-08-14 06:41:51.843767] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.844 [2024-08-14 06:41:51.843807] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.844 0 00:10:24.844 [2024-08-14 06:41:51.846448] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.844 [2024-08-14 06:41:51.846508] bdev_raid.c: 343:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:10:24.844 [2024-08-14 06:41:51.846544] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.844 [2024-08-14 06:41:51.846553] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:10:24.844 06:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 77859 00:10:24.844 06:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 77859 ']' 00:10:24.844 06:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 77859 00:10:24.844 06:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:10:24.844 06:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:24.844 06:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77859 00:10:24.844 killing process with pid 77859 00:10:24.844 06:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:24.844 06:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:24.844 06:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77859' 00:10:24.844 06:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 77859 00:10:24.844 [2024-08-14 06:41:51.906925] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.844 06:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 77859 00:10:24.844 [2024-08-14 06:41:51.933333] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.103 06:41:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:10:25.103 06:41:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.QO5dPFnNfx 00:10:25.103 06:41:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:10:25.103 06:41:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:10:25.103 06:41:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:10:25.103 06:41:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:25.103 06:41:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:25.103 06:41:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:10:25.103 00:10:25.103 real 0m6.526s 00:10:25.103 user 0m10.368s 00:10:25.103 sys 0m0.904s 00:10:25.103 06:41:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:25.103 ************************************ 00:10:25.103 06:41:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.103 END TEST raid_write_error_test 00:10:25.103 ************************************ 00:10:25.103 06:41:52 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:10:25.103 06:41:52 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:10:25.103 06:41:52 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:10:25.103 06:41:52 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:25.103 06:41:52 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:10:25.103 ************************************ 00:10:25.103 START TEST raid_state_function_test 00:10:25.103 ************************************ 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 false 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:25.103 Process raid pid: 78041 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=78041 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid 
pid: 78041' 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 78041 /var/tmp/spdk-raid.sock 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 78041 ']' 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:25.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:25.103 06:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.103 [2024-08-14 06:41:52.335699] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:10:25.103 [2024-08-14 06:41:52.335938] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.361 [2024-08-14 06:41:52.484022] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.361 [2024-08-14 06:41:52.532345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.361 [2024-08-14 06:41:52.575498] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.361 [2024-08-14 06:41:52.575612] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.928 06:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:25.929 06:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:10:25.929 06:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:26.251 [2024-08-14 06:41:53.347524] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.251 [2024-08-14 06:41:53.347651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.251 [2024-08-14 06:41:53.347695] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.251 [2024-08-14 06:41:53.347721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.251 [2024-08-14 06:41:53.347752] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.251 [2024-08-14 06:41:53.347781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.251 06:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:26.251 06:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:26.251 06:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:26.251 06:41:53 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:26.251 06:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:26.251 06:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:26.251 06:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:26.251 06:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:26.251 06:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:26.251 06:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:26.251 06:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:26.251 06:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.509 06:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:26.509 "name": "Existed_Raid", 00:10:26.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.509 "strip_size_kb": 64, 00:10:26.509 "state": "configuring", 00:10:26.509 "raid_level": "concat", 00:10:26.509 "superblock": false, 00:10:26.509 "num_base_bdevs": 3, 00:10:26.509 "num_base_bdevs_discovered": 0, 00:10:26.509 "num_base_bdevs_operational": 3, 00:10:26.509 "base_bdevs_list": [ 00:10:26.509 { 00:10:26.509 "name": "BaseBdev1", 00:10:26.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.509 "is_configured": false, 00:10:26.509 "data_offset": 0, 00:10:26.509 "data_size": 0 00:10:26.509 }, 00:10:26.509 { 00:10:26.509 "name": "BaseBdev2", 00:10:26.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.509 "is_configured": false, 00:10:26.509 "data_offset": 0, 00:10:26.509 "data_size": 0 00:10:26.509 }, 00:10:26.509 { 00:10:26.509 "name": "BaseBdev3", 00:10:26.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.509 "is_configured": false, 00:10:26.509 "data_offset": 0, 00:10:26.509 "data_size": 0 00:10:26.509 } 00:10:26.509 ] 00:10:26.509 }' 00:10:26.509 06:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:26.509 06:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.073 06:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:27.073 [2024-08-14 06:41:54.321726] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:27.073 [2024-08-14 06:41:54.321861] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:27.331 06:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:27.331 [2024-08-14 06:41:54.537373] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:27.331 [2024-08-14 06:41:54.537518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:27.331 [2024-08-14 06:41:54.537551] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.331 [2024-08-14 
06:41:54.537576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.331 [2024-08-14 06:41:54.537600] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:27.331 [2024-08-14 06:41:54.537632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.331 06:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:27.589 [2024-08-14 06:41:54.766363] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.589 BaseBdev1 00:10:27.589 06:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:27.589 06:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:10:27.589 06:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:27.589 06:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:27.589 06:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:27.589 06:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:27.589 06:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:27.847 06:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:28.106 [ 00:10:28.106 { 00:10:28.106 "name": "BaseBdev1", 00:10:28.106 "aliases": [ 00:10:28.106 "1e69bb74-3b15-417a-8be7-1c08f079f26d" 00:10:28.106 ], 00:10:28.106 "product_name": "Malloc disk", 00:10:28.106 "block_size": 512, 00:10:28.106 "num_blocks": 65536, 00:10:28.106 "uuid": "1e69bb74-3b15-417a-8be7-1c08f079f26d", 00:10:28.106 "assigned_rate_limits": { 00:10:28.106 "rw_ios_per_sec": 0, 00:10:28.106 "rw_mbytes_per_sec": 0, 00:10:28.106 "r_mbytes_per_sec": 0, 00:10:28.106 "w_mbytes_per_sec": 0 00:10:28.106 }, 00:10:28.106 "claimed": true, 00:10:28.106 "claim_type": "exclusive_write", 00:10:28.106 "zoned": false, 00:10:28.106 "supported_io_types": { 00:10:28.106 "read": true, 00:10:28.106 "write": true, 00:10:28.106 "unmap": true, 00:10:28.106 "flush": true, 00:10:28.106 "reset": true, 00:10:28.106 "nvme_admin": false, 00:10:28.106 "nvme_io": false, 00:10:28.106 "nvme_io_md": false, 00:10:28.106 "write_zeroes": true, 00:10:28.106 "zcopy": true, 00:10:28.106 "get_zone_info": false, 00:10:28.106 "zone_management": false, 00:10:28.106 "zone_append": false, 00:10:28.106 "compare": false, 00:10:28.106 "compare_and_write": false, 00:10:28.106 "abort": true, 00:10:28.106 "seek_hole": false, 00:10:28.106 "seek_data": false, 00:10:28.106 "copy": true, 00:10:28.106 "nvme_iov_md": false 00:10:28.106 }, 00:10:28.106 "memory_domains": [ 00:10:28.106 { 00:10:28.106 "dma_device_id": "system", 00:10:28.106 "dma_device_type": 1 00:10:28.106 }, 00:10:28.106 { 00:10:28.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.106 "dma_device_type": 2 00:10:28.106 } 00:10:28.106 ], 00:10:28.106 "driver_specific": {} 00:10:28.106 } 00:10:28.106 ] 00:10:28.106 06:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:28.106 
06:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:28.106 06:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:28.106 06:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:28.106 06:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:28.106 06:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:28.106 06:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:28.106 06:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:28.106 06:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:28.106 06:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:28.106 06:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:28.106 06:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.106 06:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.364 06:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:28.364 "name": "Existed_Raid", 00:10:28.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.364 "strip_size_kb": 64, 00:10:28.364 "state": "configuring", 00:10:28.364 "raid_level": "concat", 00:10:28.364 "superblock": false, 00:10:28.364 "num_base_bdevs": 3, 00:10:28.364 "num_base_bdevs_discovered": 1, 00:10:28.364 "num_base_bdevs_operational": 3, 00:10:28.364 "base_bdevs_list": [ 00:10:28.364 { 00:10:28.364 "name": "BaseBdev1", 00:10:28.364 "uuid": "1e69bb74-3b15-417a-8be7-1c08f079f26d", 00:10:28.364 "is_configured": true, 00:10:28.364 "data_offset": 0, 00:10:28.364 "data_size": 65536 00:10:28.364 }, 00:10:28.364 { 00:10:28.364 "name": "BaseBdev2", 00:10:28.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.364 "is_configured": false, 00:10:28.364 "data_offset": 0, 00:10:28.364 "data_size": 0 00:10:28.364 }, 00:10:28.364 { 00:10:28.364 "name": "BaseBdev3", 00:10:28.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.364 "is_configured": false, 00:10:28.364 "data_offset": 0, 00:10:28.364 "data_size": 0 00:10:28.364 } 00:10:28.364 ] 00:10:28.364 }' 00:10:28.364 06:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:28.364 06:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.929 06:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:28.929 [2024-08-14 06:41:56.160143] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:28.929 [2024-08-14 06:41:56.160230] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:28.929 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:10:29.188 [2024-08-14 06:41:56.359867] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.188 [2024-08-14 06:41:56.361709] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:29.188 [2024-08-14 06:41:56.361815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:29.188 [2024-08-14 06:41:56.361832] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:29.188 [2024-08-14 06:41:56.361841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:29.188 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:29.188 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:29.188 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:29.188 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:29.188 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:29.188 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:29.188 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:29.188 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:29.188 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:29.188 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:29.188 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:29.188 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:29.188 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.188 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.446 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:29.446 "name": "Existed_Raid", 00:10:29.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.446 "strip_size_kb": 64, 00:10:29.446 "state": "configuring", 00:10:29.447 "raid_level": "concat", 00:10:29.447 "superblock": false, 00:10:29.447 "num_base_bdevs": 3, 00:10:29.447 "num_base_bdevs_discovered": 1, 00:10:29.447 "num_base_bdevs_operational": 3, 00:10:29.447 "base_bdevs_list": [ 00:10:29.447 { 00:10:29.447 "name": "BaseBdev1", 00:10:29.447 "uuid": "1e69bb74-3b15-417a-8be7-1c08f079f26d", 00:10:29.447 "is_configured": true, 00:10:29.447 "data_offset": 0, 00:10:29.447 "data_size": 65536 00:10:29.447 }, 00:10:29.447 { 00:10:29.447 "name": "BaseBdev2", 00:10:29.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.447 "is_configured": false, 00:10:29.447 "data_offset": 0, 00:10:29.447 "data_size": 0 00:10:29.447 }, 00:10:29.447 { 00:10:29.447 "name": "BaseBdev3", 00:10:29.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.447 "is_configured": false, 00:10:29.447 "data_offset": 0, 00:10:29.447 "data_size": 0 00:10:29.447 } 00:10:29.447 ] 
00:10:29.447 }' 00:10:29.447 06:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:29.447 06:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.014 06:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:30.273 [2024-08-14 06:41:57.397814] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.273 BaseBdev2 00:10:30.273 06:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:30.273 06:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:10:30.273 06:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:30.273 06:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:30.273 06:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:30.273 06:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:30.273 06:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:30.532 06:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:30.790 [ 00:10:30.790 { 00:10:30.790 "name": "BaseBdev2", 00:10:30.790 "aliases": [ 00:10:30.790 "94a02ba3-6e0a-4f67-8522-639b80afe36c" 00:10:30.790 ], 00:10:30.790 "product_name": "Malloc disk", 00:10:30.790 "block_size": 512, 00:10:30.790 "num_blocks": 65536, 00:10:30.790 "uuid": "94a02ba3-6e0a-4f67-8522-639b80afe36c", 00:10:30.790 "assigned_rate_limits": { 00:10:30.790 "rw_ios_per_sec": 0, 00:10:30.790 "rw_mbytes_per_sec": 0, 00:10:30.791 "r_mbytes_per_sec": 0, 00:10:30.791 "w_mbytes_per_sec": 0 00:10:30.791 }, 00:10:30.791 "claimed": true, 00:10:30.791 "claim_type": "exclusive_write", 00:10:30.791 "zoned": false, 00:10:30.791 "supported_io_types": { 00:10:30.791 "read": true, 00:10:30.791 "write": true, 00:10:30.791 "unmap": true, 00:10:30.791 "flush": true, 00:10:30.791 "reset": true, 00:10:30.791 "nvme_admin": false, 00:10:30.791 "nvme_io": false, 00:10:30.791 "nvme_io_md": false, 00:10:30.791 "write_zeroes": true, 00:10:30.791 "zcopy": true, 00:10:30.791 "get_zone_info": false, 00:10:30.791 "zone_management": false, 00:10:30.791 "zone_append": false, 00:10:30.791 "compare": false, 00:10:30.791 "compare_and_write": false, 00:10:30.791 "abort": true, 00:10:30.791 "seek_hole": false, 00:10:30.791 "seek_data": false, 00:10:30.791 "copy": true, 00:10:30.791 "nvme_iov_md": false 00:10:30.791 }, 00:10:30.791 "memory_domains": [ 00:10:30.791 { 00:10:30.791 "dma_device_id": "system", 00:10:30.791 "dma_device_type": 1 00:10:30.791 }, 00:10:30.791 { 00:10:30.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.791 "dma_device_type": 2 00:10:30.791 } 00:10:30.791 ], 00:10:30.791 "driver_specific": {} 00:10:30.791 } 00:10:30.791 ] 00:10:30.791 06:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:30.791 06:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:30.791 06:41:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:30.791 06:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:30.791 06:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:30.791 06:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:30.791 06:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:30.791 06:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:30.791 06:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:30.791 06:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:30.791 06:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:30.791 06:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:30.791 06:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:30.791 06:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.791 06:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.050 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:31.050 "name": "Existed_Raid", 00:10:31.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.050 "strip_size_kb": 64, 00:10:31.050 "state": "configuring", 00:10:31.050 "raid_level": "concat", 00:10:31.050 "superblock": false, 00:10:31.050 "num_base_bdevs": 3, 00:10:31.050 "num_base_bdevs_discovered": 2, 00:10:31.050 "num_base_bdevs_operational": 3, 00:10:31.050 "base_bdevs_list": [ 00:10:31.050 { 00:10:31.050 "name": "BaseBdev1", 00:10:31.050 "uuid": "1e69bb74-3b15-417a-8be7-1c08f079f26d", 00:10:31.050 "is_configured": true, 00:10:31.050 "data_offset": 0, 00:10:31.050 "data_size": 65536 00:10:31.050 }, 00:10:31.050 { 00:10:31.050 "name": "BaseBdev2", 00:10:31.050 "uuid": "94a02ba3-6e0a-4f67-8522-639b80afe36c", 00:10:31.050 "is_configured": true, 00:10:31.050 "data_offset": 0, 00:10:31.050 "data_size": 65536 00:10:31.050 }, 00:10:31.050 { 00:10:31.050 "name": "BaseBdev3", 00:10:31.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.050 "is_configured": false, 00:10:31.050 "data_offset": 0, 00:10:31.050 "data_size": 0 00:10:31.050 } 00:10:31.050 ] 00:10:31.050 }' 00:10:31.050 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:31.050 06:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.618 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:31.876 [2024-08-14 06:41:58.882389] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.876 [2024-08-14 06:41:58.882555] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:31.876 [2024-08-14 06:41:58.882568] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:31.876 [2024-08-14 06:41:58.882858] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:31.876 [2024-08-14 06:41:58.882990] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:31.876 [2024-08-14 06:41:58.883002] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:10:31.876 [2024-08-14 06:41:58.883210] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.876 BaseBdev3 00:10:31.876 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:31.876 06:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:10:31.876 06:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:31.876 06:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:31.876 06:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:31.876 06:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:31.876 06:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:31.876 06:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:32.135 [ 00:10:32.135 { 00:10:32.135 "name": "BaseBdev3", 00:10:32.135 "aliases": [ 00:10:32.135 "3163d28b-9ce4-499e-9b9e-cd7fb47419ec" 00:10:32.135 ], 00:10:32.135 "product_name": "Malloc disk", 00:10:32.135 "block_size": 512, 00:10:32.135 "num_blocks": 65536, 00:10:32.135 "uuid": "3163d28b-9ce4-499e-9b9e-cd7fb47419ec", 00:10:32.135 "assigned_rate_limits": { 00:10:32.135 "rw_ios_per_sec": 0, 00:10:32.135 "rw_mbytes_per_sec": 0, 00:10:32.135 "r_mbytes_per_sec": 0, 00:10:32.135 "w_mbytes_per_sec": 0 00:10:32.135 }, 00:10:32.135 "claimed": true, 00:10:32.135 "claim_type": "exclusive_write", 00:10:32.135 "zoned": false, 00:10:32.135 "supported_io_types": { 00:10:32.135 "read": true, 00:10:32.135 "write": true, 00:10:32.135 "unmap": true, 00:10:32.135 "flush": true, 00:10:32.135 "reset": true, 00:10:32.135 "nvme_admin": false, 00:10:32.135 "nvme_io": false, 00:10:32.135 "nvme_io_md": false, 00:10:32.135 "write_zeroes": true, 00:10:32.135 "zcopy": true, 00:10:32.135 "get_zone_info": false, 00:10:32.135 "zone_management": false, 00:10:32.135 "zone_append": false, 00:10:32.135 "compare": false, 00:10:32.135 "compare_and_write": false, 00:10:32.135 "abort": true, 00:10:32.135 "seek_hole": false, 00:10:32.135 "seek_data": false, 00:10:32.135 "copy": true, 00:10:32.135 "nvme_iov_md": false 00:10:32.135 }, 00:10:32.135 "memory_domains": [ 00:10:32.135 { 00:10:32.135 "dma_device_id": "system", 00:10:32.135 "dma_device_type": 1 00:10:32.135 }, 00:10:32.135 { 00:10:32.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.135 "dma_device_type": 2 00:10:32.135 } 00:10:32.135 ], 00:10:32.135 "driver_specific": {} 00:10:32.135 } 00:10:32.135 ] 00:10:32.135 06:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:32.135 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:32.135 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:32.136 
06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:32.136 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:32.136 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:32.136 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:32.136 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:32.136 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:32.136 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:32.136 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:32.136 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:32.136 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:32.136 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.136 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.394 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:32.394 "name": "Existed_Raid", 00:10:32.394 "uuid": "7ace9bea-8e27-4492-aa48-15cf55352764", 00:10:32.394 "strip_size_kb": 64, 00:10:32.394 "state": "online", 00:10:32.394 "raid_level": "concat", 00:10:32.394 "superblock": false, 00:10:32.394 "num_base_bdevs": 3, 00:10:32.394 "num_base_bdevs_discovered": 3, 00:10:32.394 "num_base_bdevs_operational": 3, 00:10:32.394 "base_bdevs_list": [ 00:10:32.394 { 00:10:32.394 "name": "BaseBdev1", 00:10:32.394 "uuid": "1e69bb74-3b15-417a-8be7-1c08f079f26d", 00:10:32.394 "is_configured": true, 00:10:32.394 "data_offset": 0, 00:10:32.394 "data_size": 65536 00:10:32.394 }, 00:10:32.394 { 00:10:32.394 "name": "BaseBdev2", 00:10:32.394 "uuid": "94a02ba3-6e0a-4f67-8522-639b80afe36c", 00:10:32.394 "is_configured": true, 00:10:32.394 "data_offset": 0, 00:10:32.394 "data_size": 65536 00:10:32.394 }, 00:10:32.394 { 00:10:32.394 "name": "BaseBdev3", 00:10:32.394 "uuid": "3163d28b-9ce4-499e-9b9e-cd7fb47419ec", 00:10:32.394 "is_configured": true, 00:10:32.394 "data_offset": 0, 00:10:32.394 "data_size": 65536 00:10:32.394 } 00:10:32.394 ] 00:10:32.394 }' 00:10:32.394 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:32.394 06:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.961 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:32.961 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:32.961 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:32.961 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:32.961 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:32.961 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:32.961 06:42:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:32.961 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:33.221 [2024-08-14 06:42:00.388308] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.221 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:33.221 "name": "Existed_Raid", 00:10:33.221 "aliases": [ 00:10:33.221 "7ace9bea-8e27-4492-aa48-15cf55352764" 00:10:33.221 ], 00:10:33.221 "product_name": "Raid Volume", 00:10:33.221 "block_size": 512, 00:10:33.221 "num_blocks": 196608, 00:10:33.221 "uuid": "7ace9bea-8e27-4492-aa48-15cf55352764", 00:10:33.221 "assigned_rate_limits": { 00:10:33.221 "rw_ios_per_sec": 0, 00:10:33.221 "rw_mbytes_per_sec": 0, 00:10:33.221 "r_mbytes_per_sec": 0, 00:10:33.221 "w_mbytes_per_sec": 0 00:10:33.221 }, 00:10:33.221 "claimed": false, 00:10:33.221 "zoned": false, 00:10:33.221 "supported_io_types": { 00:10:33.221 "read": true, 00:10:33.221 "write": true, 00:10:33.221 "unmap": true, 00:10:33.221 "flush": true, 00:10:33.221 "reset": true, 00:10:33.221 "nvme_admin": false, 00:10:33.221 "nvme_io": false, 00:10:33.221 "nvme_io_md": false, 00:10:33.221 "write_zeroes": true, 00:10:33.221 "zcopy": false, 00:10:33.221 "get_zone_info": false, 00:10:33.221 "zone_management": false, 00:10:33.221 "zone_append": false, 00:10:33.221 "compare": false, 00:10:33.221 "compare_and_write": false, 00:10:33.221 "abort": false, 00:10:33.221 "seek_hole": false, 00:10:33.221 "seek_data": false, 00:10:33.221 "copy": false, 00:10:33.221 "nvme_iov_md": false 00:10:33.221 }, 00:10:33.221 "memory_domains": [ 00:10:33.221 { 00:10:33.221 "dma_device_id": "system", 00:10:33.221 "dma_device_type": 1 00:10:33.221 }, 00:10:33.221 { 00:10:33.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.221 "dma_device_type": 2 00:10:33.221 }, 00:10:33.221 { 00:10:33.221 "dma_device_id": "system", 00:10:33.221 "dma_device_type": 1 00:10:33.221 }, 00:10:33.221 { 00:10:33.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.221 "dma_device_type": 2 00:10:33.221 }, 00:10:33.221 { 00:10:33.221 "dma_device_id": "system", 00:10:33.221 "dma_device_type": 1 00:10:33.221 }, 00:10:33.221 { 00:10:33.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.221 "dma_device_type": 2 00:10:33.221 } 00:10:33.221 ], 00:10:33.221 "driver_specific": { 00:10:33.221 "raid": { 00:10:33.221 "uuid": "7ace9bea-8e27-4492-aa48-15cf55352764", 00:10:33.221 "strip_size_kb": 64, 00:10:33.221 "state": "online", 00:10:33.221 "raid_level": "concat", 00:10:33.221 "superblock": false, 00:10:33.221 "num_base_bdevs": 3, 00:10:33.221 "num_base_bdevs_discovered": 3, 00:10:33.221 "num_base_bdevs_operational": 3, 00:10:33.221 "base_bdevs_list": [ 00:10:33.221 { 00:10:33.221 "name": "BaseBdev1", 00:10:33.221 "uuid": "1e69bb74-3b15-417a-8be7-1c08f079f26d", 00:10:33.221 "is_configured": true, 00:10:33.221 "data_offset": 0, 00:10:33.221 "data_size": 65536 00:10:33.222 }, 00:10:33.222 { 00:10:33.222 "name": "BaseBdev2", 00:10:33.222 "uuid": "94a02ba3-6e0a-4f67-8522-639b80afe36c", 00:10:33.222 "is_configured": true, 00:10:33.222 "data_offset": 0, 00:10:33.222 "data_size": 65536 00:10:33.222 }, 00:10:33.222 { 00:10:33.222 "name": "BaseBdev3", 00:10:33.222 "uuid": "3163d28b-9ce4-499e-9b9e-cd7fb47419ec", 00:10:33.222 "is_configured": true, 00:10:33.222 "data_offset": 0, 00:10:33.222 "data_size": 65536 00:10:33.222 } 
00:10:33.222 ] 00:10:33.222 } 00:10:33.222 } 00:10:33.222 }' 00:10:33.222 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:33.222 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:33.222 BaseBdev2 00:10:33.222 BaseBdev3' 00:10:33.222 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:33.222 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:33.222 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:33.482 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:33.482 "name": "BaseBdev1", 00:10:33.482 "aliases": [ 00:10:33.482 "1e69bb74-3b15-417a-8be7-1c08f079f26d" 00:10:33.482 ], 00:10:33.482 "product_name": "Malloc disk", 00:10:33.482 "block_size": 512, 00:10:33.482 "num_blocks": 65536, 00:10:33.482 "uuid": "1e69bb74-3b15-417a-8be7-1c08f079f26d", 00:10:33.482 "assigned_rate_limits": { 00:10:33.482 "rw_ios_per_sec": 0, 00:10:33.482 "rw_mbytes_per_sec": 0, 00:10:33.482 "r_mbytes_per_sec": 0, 00:10:33.482 "w_mbytes_per_sec": 0 00:10:33.482 }, 00:10:33.482 "claimed": true, 00:10:33.482 "claim_type": "exclusive_write", 00:10:33.482 "zoned": false, 00:10:33.482 "supported_io_types": { 00:10:33.482 "read": true, 00:10:33.482 "write": true, 00:10:33.482 "unmap": true, 00:10:33.482 "flush": true, 00:10:33.482 "reset": true, 00:10:33.482 "nvme_admin": false, 00:10:33.482 "nvme_io": false, 00:10:33.482 "nvme_io_md": false, 00:10:33.482 "write_zeroes": true, 00:10:33.482 "zcopy": true, 00:10:33.482 "get_zone_info": false, 00:10:33.482 "zone_management": false, 00:10:33.482 "zone_append": false, 00:10:33.482 "compare": false, 00:10:33.482 "compare_and_write": false, 00:10:33.482 "abort": true, 00:10:33.482 "seek_hole": false, 00:10:33.482 "seek_data": false, 00:10:33.482 "copy": true, 00:10:33.482 "nvme_iov_md": false 00:10:33.482 }, 00:10:33.482 "memory_domains": [ 00:10:33.482 { 00:10:33.482 "dma_device_id": "system", 00:10:33.482 "dma_device_type": 1 00:10:33.482 }, 00:10:33.482 { 00:10:33.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.482 "dma_device_type": 2 00:10:33.482 } 00:10:33.482 ], 00:10:33.482 "driver_specific": {} 00:10:33.482 }' 00:10:33.482 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:33.482 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:33.742 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:33.742 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:33.742 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:33.742 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:33.742 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:33.742 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:33.742 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:33.742 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
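For readers tracing the xtrace output above, the create-and-verify loop that bdev_raid.sh drives reduces to a short RPC sequence. The sketch below is reconstructed only from commands visible in this log (rpc.py path, socket, bdev names, sizes, and jq filters are the ones used by this run); it is a rough reconstruction for illustration, not a verbatim excerpt of the captured output:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # create three 32 MiB malloc bdevs with a 512-byte block size, as the test does
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc bdev_malloc_create 32 512 -b "$b"
    done

    # assemble them into a concat array with a 64 KiB strip size and wait for examine
    $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    $rpc bdev_wait_for_examine

    # the array should now report state "online" with all three members configured
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'

    # the per-member probes above (block_size / md_size / dif_type) boil down to checks like
    $rpc bdev_get_bdevs -b BaseBdev1 | jq '.[] | {block_size, md_size, dif_type}'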
00:10:33.742 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:34.001 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:34.001 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:34.001 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:34.001 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:34.001 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:34.001 "name": "BaseBdev2", 00:10:34.001 "aliases": [ 00:10:34.001 "94a02ba3-6e0a-4f67-8522-639b80afe36c" 00:10:34.001 ], 00:10:34.001 "product_name": "Malloc disk", 00:10:34.001 "block_size": 512, 00:10:34.001 "num_blocks": 65536, 00:10:34.001 "uuid": "94a02ba3-6e0a-4f67-8522-639b80afe36c", 00:10:34.001 "assigned_rate_limits": { 00:10:34.001 "rw_ios_per_sec": 0, 00:10:34.001 "rw_mbytes_per_sec": 0, 00:10:34.001 "r_mbytes_per_sec": 0, 00:10:34.001 "w_mbytes_per_sec": 0 00:10:34.001 }, 00:10:34.001 "claimed": true, 00:10:34.001 "claim_type": "exclusive_write", 00:10:34.001 "zoned": false, 00:10:34.001 "supported_io_types": { 00:10:34.001 "read": true, 00:10:34.001 "write": true, 00:10:34.001 "unmap": true, 00:10:34.001 "flush": true, 00:10:34.001 "reset": true, 00:10:34.001 "nvme_admin": false, 00:10:34.001 "nvme_io": false, 00:10:34.001 "nvme_io_md": false, 00:10:34.001 "write_zeroes": true, 00:10:34.001 "zcopy": true, 00:10:34.001 "get_zone_info": false, 00:10:34.001 "zone_management": false, 00:10:34.001 "zone_append": false, 00:10:34.001 "compare": false, 00:10:34.001 "compare_and_write": false, 00:10:34.001 "abort": true, 00:10:34.001 "seek_hole": false, 00:10:34.001 "seek_data": false, 00:10:34.001 "copy": true, 00:10:34.001 "nvme_iov_md": false 00:10:34.001 }, 00:10:34.001 "memory_domains": [ 00:10:34.001 { 00:10:34.001 "dma_device_id": "system", 00:10:34.001 "dma_device_type": 1 00:10:34.001 }, 00:10:34.001 { 00:10:34.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.001 "dma_device_type": 2 00:10:34.001 } 00:10:34.001 ], 00:10:34.001 "driver_specific": {} 00:10:34.001 }' 00:10:34.001 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:34.261 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:34.261 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:34.261 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:34.261 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:34.261 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:34.261 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:34.261 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:34.521 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:34.521 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:34.521 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:34.521 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == 
null ]] 00:10:34.521 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:34.521 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:34.521 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:34.780 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:34.780 "name": "BaseBdev3", 00:10:34.780 "aliases": [ 00:10:34.780 "3163d28b-9ce4-499e-9b9e-cd7fb47419ec" 00:10:34.780 ], 00:10:34.780 "product_name": "Malloc disk", 00:10:34.780 "block_size": 512, 00:10:34.780 "num_blocks": 65536, 00:10:34.780 "uuid": "3163d28b-9ce4-499e-9b9e-cd7fb47419ec", 00:10:34.780 "assigned_rate_limits": { 00:10:34.780 "rw_ios_per_sec": 0, 00:10:34.780 "rw_mbytes_per_sec": 0, 00:10:34.780 "r_mbytes_per_sec": 0, 00:10:34.780 "w_mbytes_per_sec": 0 00:10:34.780 }, 00:10:34.781 "claimed": true, 00:10:34.781 "claim_type": "exclusive_write", 00:10:34.781 "zoned": false, 00:10:34.781 "supported_io_types": { 00:10:34.781 "read": true, 00:10:34.781 "write": true, 00:10:34.781 "unmap": true, 00:10:34.781 "flush": true, 00:10:34.781 "reset": true, 00:10:34.781 "nvme_admin": false, 00:10:34.781 "nvme_io": false, 00:10:34.781 "nvme_io_md": false, 00:10:34.781 "write_zeroes": true, 00:10:34.781 "zcopy": true, 00:10:34.781 "get_zone_info": false, 00:10:34.781 "zone_management": false, 00:10:34.781 "zone_append": false, 00:10:34.781 "compare": false, 00:10:34.781 "compare_and_write": false, 00:10:34.781 "abort": true, 00:10:34.781 "seek_hole": false, 00:10:34.781 "seek_data": false, 00:10:34.781 "copy": true, 00:10:34.781 "nvme_iov_md": false 00:10:34.781 }, 00:10:34.781 "memory_domains": [ 00:10:34.781 { 00:10:34.781 "dma_device_id": "system", 00:10:34.781 "dma_device_type": 1 00:10:34.781 }, 00:10:34.781 { 00:10:34.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.781 "dma_device_type": 2 00:10:34.781 } 00:10:34.781 ], 00:10:34.781 "driver_specific": {} 00:10:34.781 }' 00:10:34.781 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:34.781 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:34.781 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:34.781 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:34.781 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:34.781 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:34.781 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:34.781 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:35.040 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:35.040 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:35.040 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:35.040 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:35.040 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
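The bdev_malloc_delete BaseBdev1 call that follows is the interesting transition: concat carries no redundancy (has_redundancy returns 1 for it above), so the harness expects the array to leave the online state. A minimal reproduction under the same assumptions as the previous sketch:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # deleting any member of a concat array should take the whole array offline
    $rpc bdev_malloc_delete BaseBdev1
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # expected: "offline"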
00:10:35.300 [2024-08-14 06:42:02.332793] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:35.300 [2024-08-14 06:42:02.332834] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.300 [2024-08-14 06:42:02.332898] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.300 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.561 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:35.561 "name": "Existed_Raid", 00:10:35.561 "uuid": "7ace9bea-8e27-4492-aa48-15cf55352764", 00:10:35.561 "strip_size_kb": 64, 00:10:35.561 "state": "offline", 00:10:35.561 "raid_level": "concat", 00:10:35.561 "superblock": false, 00:10:35.561 "num_base_bdevs": 3, 00:10:35.561 "num_base_bdevs_discovered": 2, 00:10:35.561 "num_base_bdevs_operational": 2, 00:10:35.561 "base_bdevs_list": [ 00:10:35.561 { 00:10:35.561 "name": null, 00:10:35.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.561 "is_configured": false, 00:10:35.561 "data_offset": 0, 00:10:35.561 "data_size": 65536 00:10:35.561 }, 00:10:35.561 { 00:10:35.561 "name": "BaseBdev2", 00:10:35.561 "uuid": "94a02ba3-6e0a-4f67-8522-639b80afe36c", 00:10:35.561 "is_configured": true, 00:10:35.561 "data_offset": 0, 00:10:35.561 "data_size": 65536 00:10:35.561 }, 00:10:35.561 { 00:10:35.561 "name": "BaseBdev3", 00:10:35.561 "uuid": "3163d28b-9ce4-499e-9b9e-cd7fb47419ec", 00:10:35.561 "is_configured": true, 00:10:35.561 "data_offset": 0, 00:10:35.561 "data_size": 65536 00:10:35.561 } 00:10:35.561 ] 00:10:35.561 }' 00:10:35.561 
06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:35.561 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.137 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:36.137 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:36.137 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.137 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:36.396 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:36.396 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.396 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:36.396 [2024-08-14 06:42:03.614164] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:36.396 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:36.396 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:36.396 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:36.396 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.654 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:36.654 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.654 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:36.912 [2024-08-14 06:42:04.081164] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:36.912 [2024-08-14 06:42:04.081250] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:36.912 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:36.912 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:36.912 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.912 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:37.171 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:37.171 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:37.171 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:37.171 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:37.171 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:37.171 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:37.430 BaseBdev2 00:10:37.431 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:37.431 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:10:37.431 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:37.431 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:37.431 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:37.431 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:37.431 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:37.690 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:37.949 [ 00:10:37.949 { 00:10:37.949 "name": "BaseBdev2", 00:10:37.949 "aliases": [ 00:10:37.949 "0a799778-8840-4a55-9ee7-39b86e305b31" 00:10:37.949 ], 00:10:37.949 "product_name": "Malloc disk", 00:10:37.949 "block_size": 512, 00:10:37.949 "num_blocks": 65536, 00:10:37.949 "uuid": "0a799778-8840-4a55-9ee7-39b86e305b31", 00:10:37.949 "assigned_rate_limits": { 00:10:37.949 "rw_ios_per_sec": 0, 00:10:37.949 "rw_mbytes_per_sec": 0, 00:10:37.949 "r_mbytes_per_sec": 0, 00:10:37.949 "w_mbytes_per_sec": 0 00:10:37.949 }, 00:10:37.949 "claimed": false, 00:10:37.949 "zoned": false, 00:10:37.949 "supported_io_types": { 00:10:37.949 "read": true, 00:10:37.949 "write": true, 00:10:37.949 "unmap": true, 00:10:37.949 "flush": true, 00:10:37.949 "reset": true, 00:10:37.949 "nvme_admin": false, 00:10:37.949 "nvme_io": false, 00:10:37.949 "nvme_io_md": false, 00:10:37.949 "write_zeroes": true, 00:10:37.949 "zcopy": true, 00:10:37.949 "get_zone_info": false, 00:10:37.949 "zone_management": false, 00:10:37.949 "zone_append": false, 00:10:37.949 "compare": false, 00:10:37.949 "compare_and_write": false, 00:10:37.949 "abort": true, 00:10:37.949 "seek_hole": false, 00:10:37.949 "seek_data": false, 00:10:37.949 "copy": true, 00:10:37.949 "nvme_iov_md": false 00:10:37.949 }, 00:10:37.949 "memory_domains": [ 00:10:37.949 { 00:10:37.949 "dma_device_id": "system", 00:10:37.949 "dma_device_type": 1 00:10:37.949 }, 00:10:37.949 { 00:10:37.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.949 "dma_device_type": 2 00:10:37.949 } 00:10:37.949 ], 00:10:37.949 "driver_specific": {} 00:10:37.949 } 00:10:37.949 ] 00:10:37.949 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:37.949 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:37.949 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:37.949 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:38.209 BaseBdev3 00:10:38.209 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:38.209 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local 
bdev_name=BaseBdev3 00:10:38.209 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:38.209 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:38.209 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:38.209 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:38.209 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:38.468 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:38.728 [ 00:10:38.728 { 00:10:38.728 "name": "BaseBdev3", 00:10:38.728 "aliases": [ 00:10:38.728 "d476f73b-eb37-4891-919a-23a0d10e8f23" 00:10:38.728 ], 00:10:38.728 "product_name": "Malloc disk", 00:10:38.728 "block_size": 512, 00:10:38.728 "num_blocks": 65536, 00:10:38.728 "uuid": "d476f73b-eb37-4891-919a-23a0d10e8f23", 00:10:38.728 "assigned_rate_limits": { 00:10:38.728 "rw_ios_per_sec": 0, 00:10:38.728 "rw_mbytes_per_sec": 0, 00:10:38.728 "r_mbytes_per_sec": 0, 00:10:38.728 "w_mbytes_per_sec": 0 00:10:38.728 }, 00:10:38.728 "claimed": false, 00:10:38.728 "zoned": false, 00:10:38.728 "supported_io_types": { 00:10:38.728 "read": true, 00:10:38.728 "write": true, 00:10:38.728 "unmap": true, 00:10:38.728 "flush": true, 00:10:38.728 "reset": true, 00:10:38.728 "nvme_admin": false, 00:10:38.728 "nvme_io": false, 00:10:38.728 "nvme_io_md": false, 00:10:38.728 "write_zeroes": true, 00:10:38.728 "zcopy": true, 00:10:38.728 "get_zone_info": false, 00:10:38.728 "zone_management": false, 00:10:38.728 "zone_append": false, 00:10:38.728 "compare": false, 00:10:38.728 "compare_and_write": false, 00:10:38.728 "abort": true, 00:10:38.728 "seek_hole": false, 00:10:38.728 "seek_data": false, 00:10:38.728 "copy": true, 00:10:38.728 "nvme_iov_md": false 00:10:38.728 }, 00:10:38.728 "memory_domains": [ 00:10:38.728 { 00:10:38.728 "dma_device_id": "system", 00:10:38.728 "dma_device_type": 1 00:10:38.728 }, 00:10:38.728 { 00:10:38.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.728 "dma_device_type": 2 00:10:38.728 } 00:10:38.728 ], 00:10:38.728 "driver_specific": {} 00:10:38.728 } 00:10:38.728 ] 00:10:38.728 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:38.728 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:38.728 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:38.728 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:38.728 [2024-08-14 06:42:05.944375] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.728 [2024-08-14 06:42:05.944537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.728 [2024-08-14 06:42:05.944569] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.728 [2024-08-14 06:42:05.946611] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.728 06:42:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:38.728 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:38.728 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:38.728 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:38.728 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:38.728 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:38.728 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:38.728 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:38.728 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:38.728 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:38.728 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.728 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:38.988 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:38.988 "name": "Existed_Raid", 00:10:38.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.988 "strip_size_kb": 64, 00:10:38.988 "state": "configuring", 00:10:38.988 "raid_level": "concat", 00:10:38.988 "superblock": false, 00:10:38.988 "num_base_bdevs": 3, 00:10:38.988 "num_base_bdevs_discovered": 2, 00:10:38.988 "num_base_bdevs_operational": 3, 00:10:38.988 "base_bdevs_list": [ 00:10:38.988 { 00:10:38.988 "name": "BaseBdev1", 00:10:38.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.988 "is_configured": false, 00:10:38.988 "data_offset": 0, 00:10:38.988 "data_size": 0 00:10:38.988 }, 00:10:38.988 { 00:10:38.988 "name": "BaseBdev2", 00:10:38.988 "uuid": "0a799778-8840-4a55-9ee7-39b86e305b31", 00:10:38.988 "is_configured": true, 00:10:38.988 "data_offset": 0, 00:10:38.988 "data_size": 65536 00:10:38.988 }, 00:10:38.988 { 00:10:38.988 "name": "BaseBdev3", 00:10:38.988 "uuid": "d476f73b-eb37-4891-919a-23a0d10e8f23", 00:10:38.988 "is_configured": true, 00:10:38.988 "data_offset": 0, 00:10:38.988 "data_size": 65536 00:10:38.988 } 00:10:38.988 ] 00:10:38.988 }' 00:10:38.988 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:38.988 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.556 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:39.815 [2024-08-14 06:42:07.002578] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:39.815 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:39.815 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:39.816 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
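Removing a member from a still-configuring array (the bdev_raid_remove_base_bdev BaseBdev2 step above) keeps its slot in base_bdevs_list but clears it. The check the script performs is roughly the following, again reconstructed from the commands and jq filter visible in this log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # after removing BaseBdev2 the array stays "configuring" but slot 1 is no longer configured
    $rpc bdev_raid_remove_base_bdev BaseBdev2
    $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'   # expected: false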
00:10:39.816 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:39.816 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:39.816 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:39.816 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:39.816 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:39.816 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:39.816 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:39.816 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.816 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.076 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:40.076 "name": "Existed_Raid", 00:10:40.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.076 "strip_size_kb": 64, 00:10:40.076 "state": "configuring", 00:10:40.076 "raid_level": "concat", 00:10:40.076 "superblock": false, 00:10:40.076 "num_base_bdevs": 3, 00:10:40.076 "num_base_bdevs_discovered": 1, 00:10:40.076 "num_base_bdevs_operational": 3, 00:10:40.076 "base_bdevs_list": [ 00:10:40.076 { 00:10:40.076 "name": "BaseBdev1", 00:10:40.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.076 "is_configured": false, 00:10:40.076 "data_offset": 0, 00:10:40.076 "data_size": 0 00:10:40.076 }, 00:10:40.076 { 00:10:40.076 "name": null, 00:10:40.076 "uuid": "0a799778-8840-4a55-9ee7-39b86e305b31", 00:10:40.076 "is_configured": false, 00:10:40.076 "data_offset": 0, 00:10:40.076 "data_size": 65536 00:10:40.076 }, 00:10:40.076 { 00:10:40.076 "name": "BaseBdev3", 00:10:40.076 "uuid": "d476f73b-eb37-4891-919a-23a0d10e8f23", 00:10:40.076 "is_configured": true, 00:10:40.076 "data_offset": 0, 00:10:40.076 "data_size": 65536 00:10:40.076 } 00:10:40.076 ] 00:10:40.076 }' 00:10:40.076 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:40.076 06:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.646 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:40.646 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:40.905 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:40.905 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:41.164 [2024-08-14 06:42:08.327561] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.164 BaseBdev1 00:10:41.164 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:10:41.164 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:10:41.164 06:42:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:41.164 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:41.164 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:41.164 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:41.164 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:41.422 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:41.680 [ 00:10:41.680 { 00:10:41.680 "name": "BaseBdev1", 00:10:41.680 "aliases": [ 00:10:41.680 "b88cafdd-d51b-45fb-ac0c-73d8154609e1" 00:10:41.680 ], 00:10:41.680 "product_name": "Malloc disk", 00:10:41.680 "block_size": 512, 00:10:41.680 "num_blocks": 65536, 00:10:41.680 "uuid": "b88cafdd-d51b-45fb-ac0c-73d8154609e1", 00:10:41.680 "assigned_rate_limits": { 00:10:41.680 "rw_ios_per_sec": 0, 00:10:41.680 "rw_mbytes_per_sec": 0, 00:10:41.680 "r_mbytes_per_sec": 0, 00:10:41.680 "w_mbytes_per_sec": 0 00:10:41.680 }, 00:10:41.680 "claimed": true, 00:10:41.680 "claim_type": "exclusive_write", 00:10:41.680 "zoned": false, 00:10:41.680 "supported_io_types": { 00:10:41.680 "read": true, 00:10:41.680 "write": true, 00:10:41.680 "unmap": true, 00:10:41.680 "flush": true, 00:10:41.680 "reset": true, 00:10:41.680 "nvme_admin": false, 00:10:41.680 "nvme_io": false, 00:10:41.680 "nvme_io_md": false, 00:10:41.680 "write_zeroes": true, 00:10:41.680 "zcopy": true, 00:10:41.680 "get_zone_info": false, 00:10:41.680 "zone_management": false, 00:10:41.680 "zone_append": false, 00:10:41.680 "compare": false, 00:10:41.680 "compare_and_write": false, 00:10:41.680 "abort": true, 00:10:41.680 "seek_hole": false, 00:10:41.680 "seek_data": false, 00:10:41.680 "copy": true, 00:10:41.680 "nvme_iov_md": false 00:10:41.680 }, 00:10:41.680 "memory_domains": [ 00:10:41.680 { 00:10:41.680 "dma_device_id": "system", 00:10:41.680 "dma_device_type": 1 00:10:41.680 }, 00:10:41.680 { 00:10:41.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.680 "dma_device_type": 2 00:10:41.680 } 00:10:41.680 ], 00:10:41.680 "driver_specific": {} 00:10:41.680 } 00:10:41.680 ] 00:10:41.680 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:41.680 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:41.680 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:41.680 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:41.680 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:41.680 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:41.680 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:41.680 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:41.680 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:41.680 06:42:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:41.680 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:41.680 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.680 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.939 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:41.939 "name": "Existed_Raid", 00:10:41.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.939 "strip_size_kb": 64, 00:10:41.939 "state": "configuring", 00:10:41.939 "raid_level": "concat", 00:10:41.939 "superblock": false, 00:10:41.939 "num_base_bdevs": 3, 00:10:41.939 "num_base_bdevs_discovered": 2, 00:10:41.939 "num_base_bdevs_operational": 3, 00:10:41.939 "base_bdevs_list": [ 00:10:41.939 { 00:10:41.939 "name": "BaseBdev1", 00:10:41.939 "uuid": "b88cafdd-d51b-45fb-ac0c-73d8154609e1", 00:10:41.939 "is_configured": true, 00:10:41.939 "data_offset": 0, 00:10:41.939 "data_size": 65536 00:10:41.939 }, 00:10:41.939 { 00:10:41.939 "name": null, 00:10:41.939 "uuid": "0a799778-8840-4a55-9ee7-39b86e305b31", 00:10:41.939 "is_configured": false, 00:10:41.939 "data_offset": 0, 00:10:41.939 "data_size": 65536 00:10:41.939 }, 00:10:41.939 { 00:10:41.939 "name": "BaseBdev3", 00:10:41.939 "uuid": "d476f73b-eb37-4891-919a-23a0d10e8f23", 00:10:41.939 "is_configured": true, 00:10:41.939 "data_offset": 0, 00:10:41.939 "data_size": 65536 00:10:41.939 } 00:10:41.939 ] 00:10:41.939 }' 00:10:41.939 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:41.939 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.507 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:42.507 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:42.766 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:10:42.766 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:43.023 [2024-08-14 06:42:10.136694] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.023 06:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:43.023 06:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:43.024 06:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:43.024 06:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:43.024 06:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:43.024 06:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:43.024 06:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:43.024 06:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
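The block above (and each verify_raid_bdev_state call that follows) boils down to one query-and-compare pattern: fetch every raid bdev over the test socket, isolate Existed_Raid with jq, and check a few fields against the expected values. A minimal standalone sketch of that pattern, using only the RPCs and jq filters visible in this run (the helper name check_existed_raid is ours, not something defined in bdev_raid.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    check_existed_raid() {
        # $1=expected state, $2=expected raid level, $3=expected discovered count
        local info
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "Existed_Raid")')
        [[ $(jq -r '.state' <<<"$info") == "$1" ]] &&
        [[ $(jq -r '.raid_level' <<<"$info") == "$2" ]] &&
        [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") -eq "$3" ]]
    }

    # e.g. after bdev_raid_remove_base_bdev BaseBdev3 below, the array should
    # still be assembling with only one member discovered:
    check_existed_raid configuring concat 1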
00:10:43.024 06:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:43.024 06:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:43.024 06:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.024 06:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.281 06:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:43.281 "name": "Existed_Raid", 00:10:43.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.281 "strip_size_kb": 64, 00:10:43.281 "state": "configuring", 00:10:43.281 "raid_level": "concat", 00:10:43.281 "superblock": false, 00:10:43.281 "num_base_bdevs": 3, 00:10:43.281 "num_base_bdevs_discovered": 1, 00:10:43.281 "num_base_bdevs_operational": 3, 00:10:43.281 "base_bdevs_list": [ 00:10:43.281 { 00:10:43.281 "name": "BaseBdev1", 00:10:43.281 "uuid": "b88cafdd-d51b-45fb-ac0c-73d8154609e1", 00:10:43.281 "is_configured": true, 00:10:43.282 "data_offset": 0, 00:10:43.282 "data_size": 65536 00:10:43.282 }, 00:10:43.282 { 00:10:43.282 "name": null, 00:10:43.282 "uuid": "0a799778-8840-4a55-9ee7-39b86e305b31", 00:10:43.282 "is_configured": false, 00:10:43.282 "data_offset": 0, 00:10:43.282 "data_size": 65536 00:10:43.282 }, 00:10:43.282 { 00:10:43.282 "name": null, 00:10:43.282 "uuid": "d476f73b-eb37-4891-919a-23a0d10e8f23", 00:10:43.282 "is_configured": false, 00:10:43.282 "data_offset": 0, 00:10:43.282 "data_size": 65536 00:10:43.282 } 00:10:43.282 ] 00:10:43.282 }' 00:10:43.282 06:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:43.282 06:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.848 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:43.848 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:44.107 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:10:44.107 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:44.367 [2024-08-14 06:42:11.506399] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.367 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:44.367 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:44.367 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:44.367 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:44.367 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:44.367 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:44.367 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:44.367 06:42:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:44.367 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:44.367 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:44.367 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.367 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:44.626 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:44.626 "name": "Existed_Raid", 00:10:44.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.626 "strip_size_kb": 64, 00:10:44.626 "state": "configuring", 00:10:44.626 "raid_level": "concat", 00:10:44.626 "superblock": false, 00:10:44.626 "num_base_bdevs": 3, 00:10:44.626 "num_base_bdevs_discovered": 2, 00:10:44.626 "num_base_bdevs_operational": 3, 00:10:44.626 "base_bdevs_list": [ 00:10:44.626 { 00:10:44.626 "name": "BaseBdev1", 00:10:44.626 "uuid": "b88cafdd-d51b-45fb-ac0c-73d8154609e1", 00:10:44.626 "is_configured": true, 00:10:44.626 "data_offset": 0, 00:10:44.626 "data_size": 65536 00:10:44.626 }, 00:10:44.626 { 00:10:44.626 "name": null, 00:10:44.626 "uuid": "0a799778-8840-4a55-9ee7-39b86e305b31", 00:10:44.626 "is_configured": false, 00:10:44.626 "data_offset": 0, 00:10:44.626 "data_size": 65536 00:10:44.626 }, 00:10:44.626 { 00:10:44.626 "name": "BaseBdev3", 00:10:44.626 "uuid": "d476f73b-eb37-4891-919a-23a0d10e8f23", 00:10:44.626 "is_configured": true, 00:10:44.626 "data_offset": 0, 00:10:44.626 "data_size": 65536 00:10:44.626 } 00:10:44.626 ] 00:10:44.626 }' 00:10:44.626 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:44.626 06:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.195 06:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.195 06:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:45.453 06:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:10:45.454 06:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:45.712 [2024-08-14 06:42:12.792417] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:45.712 06:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:45.712 06:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:45.712 06:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:45.712 06:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:45.712 06:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:45.712 06:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:45.712 06:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:10:45.712 06:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:45.712 06:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:45.712 06:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:45.712 06:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.712 06:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.971 06:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:45.971 "name": "Existed_Raid", 00:10:45.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.971 "strip_size_kb": 64, 00:10:45.971 "state": "configuring", 00:10:45.971 "raid_level": "concat", 00:10:45.971 "superblock": false, 00:10:45.971 "num_base_bdevs": 3, 00:10:45.971 "num_base_bdevs_discovered": 1, 00:10:45.971 "num_base_bdevs_operational": 3, 00:10:45.971 "base_bdevs_list": [ 00:10:45.971 { 00:10:45.971 "name": null, 00:10:45.971 "uuid": "b88cafdd-d51b-45fb-ac0c-73d8154609e1", 00:10:45.971 "is_configured": false, 00:10:45.971 "data_offset": 0, 00:10:45.971 "data_size": 65536 00:10:45.971 }, 00:10:45.971 { 00:10:45.971 "name": null, 00:10:45.971 "uuid": "0a799778-8840-4a55-9ee7-39b86e305b31", 00:10:45.971 "is_configured": false, 00:10:45.971 "data_offset": 0, 00:10:45.971 "data_size": 65536 00:10:45.971 }, 00:10:45.971 { 00:10:45.971 "name": "BaseBdev3", 00:10:45.971 "uuid": "d476f73b-eb37-4891-919a-23a0d10e8f23", 00:10:45.971 "is_configured": true, 00:10:45.971 "data_offset": 0, 00:10:45.971 "data_size": 65536 00:10:45.971 } 00:10:45.971 ] 00:10:45.971 }' 00:10:45.971 06:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:45.971 06:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.537 06:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:46.537 06:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:46.795 06:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:46.795 06:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:47.053 [2024-08-14 06:42:14.073228] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.053 06:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:47.053 06:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:47.053 06:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:47.053 06:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:47.053 06:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:47.053 06:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
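A few entries further down, the interesting part of this pass is the replacement path: slot 0 of base_bdevs_list still carries the UUID of the malloc disk that was deleted out from under the array, the test reads that UUID back, and creates a brand-new malloc bdev with the same UUID so that examine can re-claim it and bring Existed_Raid online. A rough reconstruction of just those steps, with the bdev name and expectations taken from this run (nothing here beyond RPCs already exercised in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # UUID left behind in slot 0 after bdev_malloc_delete BaseBdev1
    uuid=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[0].base_bdevs_list[0].uuid')

    # Re-create the backing device under a new name but the old UUID
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"
    "$rpc" -s "$sock" bdev_wait_for_examine

    # With all three members claimed, the raid bdev should now report "online"
    "$rpc" -s "$sock" bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid").state'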
00:10:47.053 06:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:47.053 06:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:47.053 06:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:47.053 06:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:47.053 06:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:47.053 06:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.311 06:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:47.311 "name": "Existed_Raid", 00:10:47.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.311 "strip_size_kb": 64, 00:10:47.311 "state": "configuring", 00:10:47.311 "raid_level": "concat", 00:10:47.311 "superblock": false, 00:10:47.311 "num_base_bdevs": 3, 00:10:47.311 "num_base_bdevs_discovered": 2, 00:10:47.311 "num_base_bdevs_operational": 3, 00:10:47.311 "base_bdevs_list": [ 00:10:47.311 { 00:10:47.311 "name": null, 00:10:47.311 "uuid": "b88cafdd-d51b-45fb-ac0c-73d8154609e1", 00:10:47.311 "is_configured": false, 00:10:47.311 "data_offset": 0, 00:10:47.311 "data_size": 65536 00:10:47.311 }, 00:10:47.311 { 00:10:47.311 "name": "BaseBdev2", 00:10:47.311 "uuid": "0a799778-8840-4a55-9ee7-39b86e305b31", 00:10:47.311 "is_configured": true, 00:10:47.311 "data_offset": 0, 00:10:47.311 "data_size": 65536 00:10:47.311 }, 00:10:47.311 { 00:10:47.311 "name": "BaseBdev3", 00:10:47.311 "uuid": "d476f73b-eb37-4891-919a-23a0d10e8f23", 00:10:47.311 "is_configured": true, 00:10:47.311 "data_offset": 0, 00:10:47.311 "data_size": 65536 00:10:47.311 } 00:10:47.311 ] 00:10:47.311 }' 00:10:47.311 06:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:47.311 06:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.880 06:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:47.880 06:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:48.139 06:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:10:48.139 06:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.139 06:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:48.411 06:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b88cafdd-d51b-45fb-ac0c-73d8154609e1 00:10:48.411 [2024-08-14 06:42:15.625998] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:48.411 [2024-08-14 06:42:15.626137] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:48.411 [2024-08-14 06:42:15.626162] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:48.411 [2024-08-14 06:42:15.626473] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:10:48.411 [2024-08-14 06:42:15.626644] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:48.411 [2024-08-14 06:42:15.626691] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:48.411 [2024-08-14 06:42:15.626924] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.411 NewBaseBdev 00:10:48.411 06:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:10:48.411 06:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:10:48.411 06:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:48.411 06:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:48.411 06:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:48.411 06:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:48.411 06:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:48.689 06:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:48.948 [ 00:10:48.948 { 00:10:48.948 "name": "NewBaseBdev", 00:10:48.948 "aliases": [ 00:10:48.948 "b88cafdd-d51b-45fb-ac0c-73d8154609e1" 00:10:48.948 ], 00:10:48.948 "product_name": "Malloc disk", 00:10:48.948 "block_size": 512, 00:10:48.948 "num_blocks": 65536, 00:10:48.948 "uuid": "b88cafdd-d51b-45fb-ac0c-73d8154609e1", 00:10:48.948 "assigned_rate_limits": { 00:10:48.948 "rw_ios_per_sec": 0, 00:10:48.948 "rw_mbytes_per_sec": 0, 00:10:48.948 "r_mbytes_per_sec": 0, 00:10:48.948 "w_mbytes_per_sec": 0 00:10:48.948 }, 00:10:48.948 "claimed": true, 00:10:48.948 "claim_type": "exclusive_write", 00:10:48.948 "zoned": false, 00:10:48.948 "supported_io_types": { 00:10:48.948 "read": true, 00:10:48.948 "write": true, 00:10:48.948 "unmap": true, 00:10:48.948 "flush": true, 00:10:48.949 "reset": true, 00:10:48.949 "nvme_admin": false, 00:10:48.949 "nvme_io": false, 00:10:48.949 "nvme_io_md": false, 00:10:48.949 "write_zeroes": true, 00:10:48.949 "zcopy": true, 00:10:48.949 "get_zone_info": false, 00:10:48.949 "zone_management": false, 00:10:48.949 "zone_append": false, 00:10:48.949 "compare": false, 00:10:48.949 "compare_and_write": false, 00:10:48.949 "abort": true, 00:10:48.949 "seek_hole": false, 00:10:48.949 "seek_data": false, 00:10:48.949 "copy": true, 00:10:48.949 "nvme_iov_md": false 00:10:48.949 }, 00:10:48.949 "memory_domains": [ 00:10:48.949 { 00:10:48.949 "dma_device_id": "system", 00:10:48.949 "dma_device_type": 1 00:10:48.949 }, 00:10:48.949 { 00:10:48.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.949 "dma_device_type": 2 00:10:48.949 } 00:10:48.949 ], 00:10:48.949 "driver_specific": {} 00:10:48.949 } 00:10:48.949 ] 00:10:48.949 06:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:48.949 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:48.949 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=Existed_Raid 00:10:48.949 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:48.949 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:48.949 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:48.949 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:48.949 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:48.949 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:48.949 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:48.949 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:48.949 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.949 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.208 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:49.208 "name": "Existed_Raid", 00:10:49.208 "uuid": "1f538aef-d8d0-4501-8b04-27e67a78db3f", 00:10:49.208 "strip_size_kb": 64, 00:10:49.208 "state": "online", 00:10:49.208 "raid_level": "concat", 00:10:49.208 "superblock": false, 00:10:49.208 "num_base_bdevs": 3, 00:10:49.208 "num_base_bdevs_discovered": 3, 00:10:49.208 "num_base_bdevs_operational": 3, 00:10:49.208 "base_bdevs_list": [ 00:10:49.208 { 00:10:49.208 "name": "NewBaseBdev", 00:10:49.208 "uuid": "b88cafdd-d51b-45fb-ac0c-73d8154609e1", 00:10:49.208 "is_configured": true, 00:10:49.208 "data_offset": 0, 00:10:49.208 "data_size": 65536 00:10:49.208 }, 00:10:49.208 { 00:10:49.208 "name": "BaseBdev2", 00:10:49.208 "uuid": "0a799778-8840-4a55-9ee7-39b86e305b31", 00:10:49.208 "is_configured": true, 00:10:49.208 "data_offset": 0, 00:10:49.208 "data_size": 65536 00:10:49.208 }, 00:10:49.208 { 00:10:49.208 "name": "BaseBdev3", 00:10:49.208 "uuid": "d476f73b-eb37-4891-919a-23a0d10e8f23", 00:10:49.208 "is_configured": true, 00:10:49.208 "data_offset": 0, 00:10:49.208 "data_size": 65536 00:10:49.208 } 00:10:49.208 ] 00:10:49.208 }' 00:10:49.208 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:49.208 06:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.780 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:10:49.780 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:49.780 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:49.780 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:49.780 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:49.780 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:49.780 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:49.780 06:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:50.041 [2024-08-14 06:42:17.203832] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.041 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:50.041 "name": "Existed_Raid", 00:10:50.041 "aliases": [ 00:10:50.041 "1f538aef-d8d0-4501-8b04-27e67a78db3f" 00:10:50.041 ], 00:10:50.041 "product_name": "Raid Volume", 00:10:50.041 "block_size": 512, 00:10:50.041 "num_blocks": 196608, 00:10:50.041 "uuid": "1f538aef-d8d0-4501-8b04-27e67a78db3f", 00:10:50.041 "assigned_rate_limits": { 00:10:50.041 "rw_ios_per_sec": 0, 00:10:50.041 "rw_mbytes_per_sec": 0, 00:10:50.041 "r_mbytes_per_sec": 0, 00:10:50.041 "w_mbytes_per_sec": 0 00:10:50.041 }, 00:10:50.041 "claimed": false, 00:10:50.041 "zoned": false, 00:10:50.041 "supported_io_types": { 00:10:50.041 "read": true, 00:10:50.041 "write": true, 00:10:50.041 "unmap": true, 00:10:50.041 "flush": true, 00:10:50.041 "reset": true, 00:10:50.041 "nvme_admin": false, 00:10:50.041 "nvme_io": false, 00:10:50.041 "nvme_io_md": false, 00:10:50.041 "write_zeroes": true, 00:10:50.041 "zcopy": false, 00:10:50.041 "get_zone_info": false, 00:10:50.041 "zone_management": false, 00:10:50.041 "zone_append": false, 00:10:50.041 "compare": false, 00:10:50.041 "compare_and_write": false, 00:10:50.041 "abort": false, 00:10:50.041 "seek_hole": false, 00:10:50.041 "seek_data": false, 00:10:50.041 "copy": false, 00:10:50.041 "nvme_iov_md": false 00:10:50.041 }, 00:10:50.041 "memory_domains": [ 00:10:50.041 { 00:10:50.041 "dma_device_id": "system", 00:10:50.041 "dma_device_type": 1 00:10:50.041 }, 00:10:50.041 { 00:10:50.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.041 "dma_device_type": 2 00:10:50.041 }, 00:10:50.041 { 00:10:50.041 "dma_device_id": "system", 00:10:50.041 "dma_device_type": 1 00:10:50.041 }, 00:10:50.041 { 00:10:50.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.041 "dma_device_type": 2 00:10:50.041 }, 00:10:50.041 { 00:10:50.041 "dma_device_id": "system", 00:10:50.041 "dma_device_type": 1 00:10:50.041 }, 00:10:50.041 { 00:10:50.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.041 "dma_device_type": 2 00:10:50.041 } 00:10:50.041 ], 00:10:50.041 "driver_specific": { 00:10:50.041 "raid": { 00:10:50.041 "uuid": "1f538aef-d8d0-4501-8b04-27e67a78db3f", 00:10:50.041 "strip_size_kb": 64, 00:10:50.041 "state": "online", 00:10:50.041 "raid_level": "concat", 00:10:50.041 "superblock": false, 00:10:50.041 "num_base_bdevs": 3, 00:10:50.041 "num_base_bdevs_discovered": 3, 00:10:50.041 "num_base_bdevs_operational": 3, 00:10:50.041 "base_bdevs_list": [ 00:10:50.041 { 00:10:50.041 "name": "NewBaseBdev", 00:10:50.041 "uuid": "b88cafdd-d51b-45fb-ac0c-73d8154609e1", 00:10:50.041 "is_configured": true, 00:10:50.041 "data_offset": 0, 00:10:50.041 "data_size": 65536 00:10:50.041 }, 00:10:50.041 { 00:10:50.041 "name": "BaseBdev2", 00:10:50.041 "uuid": "0a799778-8840-4a55-9ee7-39b86e305b31", 00:10:50.041 "is_configured": true, 00:10:50.041 "data_offset": 0, 00:10:50.041 "data_size": 65536 00:10:50.041 }, 00:10:50.041 { 00:10:50.041 "name": "BaseBdev3", 00:10:50.041 "uuid": "d476f73b-eb37-4891-919a-23a0d10e8f23", 00:10:50.041 "is_configured": true, 00:10:50.041 "data_offset": 0, 00:10:50.041 "data_size": 65536 00:10:50.041 } 00:10:50.041 ] 00:10:50.041 } 00:10:50.041 } 00:10:50.041 }' 00:10:50.041 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:10:50.041 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:10:50.041 BaseBdev2 00:10:50.041 BaseBdev3' 00:10:50.041 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:50.041 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:50.041 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:50.398 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:50.398 "name": "NewBaseBdev", 00:10:50.398 "aliases": [ 00:10:50.398 "b88cafdd-d51b-45fb-ac0c-73d8154609e1" 00:10:50.398 ], 00:10:50.398 "product_name": "Malloc disk", 00:10:50.398 "block_size": 512, 00:10:50.398 "num_blocks": 65536, 00:10:50.398 "uuid": "b88cafdd-d51b-45fb-ac0c-73d8154609e1", 00:10:50.398 "assigned_rate_limits": { 00:10:50.398 "rw_ios_per_sec": 0, 00:10:50.398 "rw_mbytes_per_sec": 0, 00:10:50.398 "r_mbytes_per_sec": 0, 00:10:50.398 "w_mbytes_per_sec": 0 00:10:50.398 }, 00:10:50.398 "claimed": true, 00:10:50.399 "claim_type": "exclusive_write", 00:10:50.399 "zoned": false, 00:10:50.399 "supported_io_types": { 00:10:50.399 "read": true, 00:10:50.399 "write": true, 00:10:50.399 "unmap": true, 00:10:50.399 "flush": true, 00:10:50.399 "reset": true, 00:10:50.399 "nvme_admin": false, 00:10:50.399 "nvme_io": false, 00:10:50.399 "nvme_io_md": false, 00:10:50.399 "write_zeroes": true, 00:10:50.399 "zcopy": true, 00:10:50.399 "get_zone_info": false, 00:10:50.399 "zone_management": false, 00:10:50.399 "zone_append": false, 00:10:50.399 "compare": false, 00:10:50.399 "compare_and_write": false, 00:10:50.399 "abort": true, 00:10:50.399 "seek_hole": false, 00:10:50.399 "seek_data": false, 00:10:50.399 "copy": true, 00:10:50.399 "nvme_iov_md": false 00:10:50.399 }, 00:10:50.399 "memory_domains": [ 00:10:50.399 { 00:10:50.399 "dma_device_id": "system", 00:10:50.399 "dma_device_type": 1 00:10:50.399 }, 00:10:50.399 { 00:10:50.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.399 "dma_device_type": 2 00:10:50.399 } 00:10:50.399 ], 00:10:50.399 "driver_specific": {} 00:10:50.399 }' 00:10:50.399 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:50.399 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:50.399 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:50.399 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:50.399 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:50.662 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:50.662 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:50.662 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:50.662 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:50.662 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:50.662 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:50.662 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null 
]] 00:10:50.662 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:50.662 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:50.662 06:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:50.923 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:50.923 "name": "BaseBdev2", 00:10:50.923 "aliases": [ 00:10:50.923 "0a799778-8840-4a55-9ee7-39b86e305b31" 00:10:50.923 ], 00:10:50.923 "product_name": "Malloc disk", 00:10:50.923 "block_size": 512, 00:10:50.923 "num_blocks": 65536, 00:10:50.923 "uuid": "0a799778-8840-4a55-9ee7-39b86e305b31", 00:10:50.923 "assigned_rate_limits": { 00:10:50.923 "rw_ios_per_sec": 0, 00:10:50.923 "rw_mbytes_per_sec": 0, 00:10:50.923 "r_mbytes_per_sec": 0, 00:10:50.923 "w_mbytes_per_sec": 0 00:10:50.923 }, 00:10:50.923 "claimed": true, 00:10:50.923 "claim_type": "exclusive_write", 00:10:50.923 "zoned": false, 00:10:50.923 "supported_io_types": { 00:10:50.923 "read": true, 00:10:50.923 "write": true, 00:10:50.923 "unmap": true, 00:10:50.923 "flush": true, 00:10:50.923 "reset": true, 00:10:50.923 "nvme_admin": false, 00:10:50.923 "nvme_io": false, 00:10:50.923 "nvme_io_md": false, 00:10:50.923 "write_zeroes": true, 00:10:50.923 "zcopy": true, 00:10:50.923 "get_zone_info": false, 00:10:50.923 "zone_management": false, 00:10:50.923 "zone_append": false, 00:10:50.923 "compare": false, 00:10:50.923 "compare_and_write": false, 00:10:50.923 "abort": true, 00:10:50.923 "seek_hole": false, 00:10:50.923 "seek_data": false, 00:10:50.923 "copy": true, 00:10:50.923 "nvme_iov_md": false 00:10:50.923 }, 00:10:50.923 "memory_domains": [ 00:10:50.923 { 00:10:50.923 "dma_device_id": "system", 00:10:50.923 "dma_device_type": 1 00:10:50.923 }, 00:10:50.923 { 00:10:50.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.923 "dma_device_type": 2 00:10:50.923 } 00:10:50.923 ], 00:10:50.923 "driver_specific": {} 00:10:50.923 }' 00:10:50.923 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:50.923 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:50.923 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:50.923 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:50.923 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:51.208 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:51.208 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:51.208 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:51.208 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:51.208 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:51.208 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:51.208 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:51.208 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:51.208 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:51.208 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:51.473 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:51.473 "name": "BaseBdev3", 00:10:51.473 "aliases": [ 00:10:51.473 "d476f73b-eb37-4891-919a-23a0d10e8f23" 00:10:51.473 ], 00:10:51.473 "product_name": "Malloc disk", 00:10:51.473 "block_size": 512, 00:10:51.473 "num_blocks": 65536, 00:10:51.473 "uuid": "d476f73b-eb37-4891-919a-23a0d10e8f23", 00:10:51.473 "assigned_rate_limits": { 00:10:51.473 "rw_ios_per_sec": 0, 00:10:51.473 "rw_mbytes_per_sec": 0, 00:10:51.473 "r_mbytes_per_sec": 0, 00:10:51.473 "w_mbytes_per_sec": 0 00:10:51.473 }, 00:10:51.473 "claimed": true, 00:10:51.473 "claim_type": "exclusive_write", 00:10:51.473 "zoned": false, 00:10:51.473 "supported_io_types": { 00:10:51.473 "read": true, 00:10:51.473 "write": true, 00:10:51.473 "unmap": true, 00:10:51.473 "flush": true, 00:10:51.473 "reset": true, 00:10:51.473 "nvme_admin": false, 00:10:51.473 "nvme_io": false, 00:10:51.473 "nvme_io_md": false, 00:10:51.473 "write_zeroes": true, 00:10:51.473 "zcopy": true, 00:10:51.473 "get_zone_info": false, 00:10:51.473 "zone_management": false, 00:10:51.473 "zone_append": false, 00:10:51.473 "compare": false, 00:10:51.473 "compare_and_write": false, 00:10:51.473 "abort": true, 00:10:51.473 "seek_hole": false, 00:10:51.473 "seek_data": false, 00:10:51.473 "copy": true, 00:10:51.473 "nvme_iov_md": false 00:10:51.473 }, 00:10:51.473 "memory_domains": [ 00:10:51.473 { 00:10:51.473 "dma_device_id": "system", 00:10:51.473 "dma_device_type": 1 00:10:51.473 }, 00:10:51.473 { 00:10:51.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.473 "dma_device_type": 2 00:10:51.473 } 00:10:51.473 ], 00:10:51.473 "driver_specific": {} 00:10:51.473 }' 00:10:51.473 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:51.473 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:51.473 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:51.473 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:51.738 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:51.738 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:51.738 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:51.738 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:51.738 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:51.738 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:51.738 06:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:52.002 06:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:52.002 06:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:52.002 [2024-08-14 06:42:19.240021] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.002 [2024-08-14 06:42:19.240063] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: 
raid bdev state changing from online to offline 00:10:52.002 [2024-08-14 06:42:19.240145] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.002 [2024-08-14 06:42:19.240221] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.002 [2024-08-14 06:42:19.240232] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:52.268 06:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 78041 00:10:52.268 06:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 78041 ']' 00:10:52.268 06:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 78041 00:10:52.268 06:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:10:52.268 06:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:52.268 06:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78041 00:10:52.268 killing process with pid 78041 00:10:52.268 06:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:52.268 06:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:52.268 06:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78041' 00:10:52.268 06:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 78041 00:10:52.268 06:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 78041 00:10:52.268 [2024-08-14 06:42:19.313066] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:52.268 [2024-08-14 06:42:19.345216] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:52.531 00:10:52.531 real 0m27.348s 00:10:52.531 user 0m51.089s 00:10:52.531 sys 0m3.909s 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:52.531 ************************************ 00:10:52.531 END TEST raid_state_function_test 00:10:52.531 ************************************ 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.531 06:42:19 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:52.531 06:42:19 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:10:52.531 06:42:19 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:52.531 06:42:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.531 ************************************ 00:10:52.531 START TEST raid_state_function_test_sb 00:10:52.531 ************************************ 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 true 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 
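From here the log switches to raid_state_function_test_sb, which re-runs the same state machine with superblock=true; the only operative difference is that superblock_create_arg becomes -s when bdev_raid_create is invoked. That matches the member dumps later in this run, where data_offset jumps from 0 to 2048 and data_size shrinks from 65536 to 63488 blocks on the same 65536-block malloc disks, consistent with space being reserved for the on-disk superblock. A sketch of the two create calls follows; the -s form is taken verbatim from this run, the plain form is inferred from the first test where superblock_create_arg was left empty:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # raid_state_function_test: no superblock, members usable from block 0
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # raid_state_function_test_sb: -s asks for an on-disk superblock, which is
    # why the later Existed_Raid dumps report data_offset 2048 / data_size 63488
    "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid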
00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=78962 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 78962' 00:10:52.531 Process raid pid: 78962 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 78962 /var/tmp/spdk-raid.sock 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 78962 ']' 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:52.531 06:42:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:52.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:52.531 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.531 [2024-08-14 06:42:19.747792] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:10:52.531 [2024-08-14 06:42:19.748022] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.792 [2024-08-14 06:42:19.896729] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.792 [2024-08-14 06:42:19.947505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.792 [2024-08-14 06:42:19.991136] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.792 [2024-08-14 06:42:19.991166] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.729 06:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:53.729 06:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:10:53.729 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:53.729 [2024-08-14 06:42:20.823839] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.729 [2024-08-14 06:42:20.823913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.729 [2024-08-14 06:42:20.823926] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.729 [2024-08-14 06:42:20.823935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.729 [2024-08-14 06:42:20.823946] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.729 [2024-08-14 06:42:20.823954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.729 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:53.729 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:53.729 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:53.729 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:53.729 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:53.729 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:53.729 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:53.729 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:10:53.729 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:53.729 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:53.729 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:53.729 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.988 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:53.988 "name": "Existed_Raid", 00:10:53.988 "uuid": "379e60ef-4e2a-402f-bfe2-225dc5f62121", 00:10:53.988 "strip_size_kb": 64, 00:10:53.988 "state": "configuring", 00:10:53.988 "raid_level": "concat", 00:10:53.988 "superblock": true, 00:10:53.988 "num_base_bdevs": 3, 00:10:53.988 "num_base_bdevs_discovered": 0, 00:10:53.988 "num_base_bdevs_operational": 3, 00:10:53.988 "base_bdevs_list": [ 00:10:53.988 { 00:10:53.988 "name": "BaseBdev1", 00:10:53.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.988 "is_configured": false, 00:10:53.988 "data_offset": 0, 00:10:53.988 "data_size": 0 00:10:53.988 }, 00:10:53.988 { 00:10:53.988 "name": "BaseBdev2", 00:10:53.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.988 "is_configured": false, 00:10:53.988 "data_offset": 0, 00:10:53.988 "data_size": 0 00:10:53.988 }, 00:10:53.988 { 00:10:53.988 "name": "BaseBdev3", 00:10:53.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.988 "is_configured": false, 00:10:53.988 "data_offset": 0, 00:10:53.988 "data_size": 0 00:10:53.988 } 00:10:53.988 ] 00:10:53.988 }' 00:10:53.988 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:53.988 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.556 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:54.815 [2024-08-14 06:42:21.833952] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.815 [2024-08-14 06:42:21.834075] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:54.815 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:54.815 [2024-08-14 06:42:22.061598] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.815 [2024-08-14 06:42:22.061741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.815 [2024-08-14 06:42:22.061778] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.815 [2024-08-14 06:42:22.061803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.815 [2024-08-14 06:42:22.061826] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:54.815 [2024-08-14 06:42:22.061867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.075 06:42:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:55.075 [2024-08-14 06:42:22.286433] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.075 BaseBdev1 00:10:55.075 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:55.075 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:10:55.075 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:55.075 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:55.075 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:55.075 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:55.075 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:55.334 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:55.594 [ 00:10:55.594 { 00:10:55.594 "name": "BaseBdev1", 00:10:55.594 "aliases": [ 00:10:55.594 "4b0729c7-83f7-4de0-9818-5cda0d6f4548" 00:10:55.594 ], 00:10:55.594 "product_name": "Malloc disk", 00:10:55.594 "block_size": 512, 00:10:55.594 "num_blocks": 65536, 00:10:55.594 "uuid": "4b0729c7-83f7-4de0-9818-5cda0d6f4548", 00:10:55.594 "assigned_rate_limits": { 00:10:55.594 "rw_ios_per_sec": 0, 00:10:55.594 "rw_mbytes_per_sec": 0, 00:10:55.594 "r_mbytes_per_sec": 0, 00:10:55.594 "w_mbytes_per_sec": 0 00:10:55.594 }, 00:10:55.594 "claimed": true, 00:10:55.594 "claim_type": "exclusive_write", 00:10:55.594 "zoned": false, 00:10:55.594 "supported_io_types": { 00:10:55.594 "read": true, 00:10:55.594 "write": true, 00:10:55.594 "unmap": true, 00:10:55.594 "flush": true, 00:10:55.594 "reset": true, 00:10:55.594 "nvme_admin": false, 00:10:55.594 "nvme_io": false, 00:10:55.594 "nvme_io_md": false, 00:10:55.594 "write_zeroes": true, 00:10:55.594 "zcopy": true, 00:10:55.594 "get_zone_info": false, 00:10:55.594 "zone_management": false, 00:10:55.594 "zone_append": false, 00:10:55.594 "compare": false, 00:10:55.594 "compare_and_write": false, 00:10:55.594 "abort": true, 00:10:55.594 "seek_hole": false, 00:10:55.594 "seek_data": false, 00:10:55.594 "copy": true, 00:10:55.594 "nvme_iov_md": false 00:10:55.594 }, 00:10:55.594 "memory_domains": [ 00:10:55.594 { 00:10:55.594 "dma_device_id": "system", 00:10:55.594 "dma_device_type": 1 00:10:55.594 }, 00:10:55.594 { 00:10:55.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.594 "dma_device_type": 2 00:10:55.594 } 00:10:55.594 ], 00:10:55.594 "driver_specific": {} 00:10:55.594 } 00:10:55.594 ] 00:10:55.594 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:55.594 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:55.594 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:55.594 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:55.594 06:42:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:55.594 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:55.594 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:55.594 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:55.594 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:55.594 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:55.594 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:55.594 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.594 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.854 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:55.854 "name": "Existed_Raid", 00:10:55.854 "uuid": "8052923c-b97d-4f42-801d-42df3dd92b7f", 00:10:55.854 "strip_size_kb": 64, 00:10:55.854 "state": "configuring", 00:10:55.854 "raid_level": "concat", 00:10:55.854 "superblock": true, 00:10:55.854 "num_base_bdevs": 3, 00:10:55.854 "num_base_bdevs_discovered": 1, 00:10:55.854 "num_base_bdevs_operational": 3, 00:10:55.854 "base_bdevs_list": [ 00:10:55.854 { 00:10:55.854 "name": "BaseBdev1", 00:10:55.854 "uuid": "4b0729c7-83f7-4de0-9818-5cda0d6f4548", 00:10:55.854 "is_configured": true, 00:10:55.854 "data_offset": 2048, 00:10:55.854 "data_size": 63488 00:10:55.854 }, 00:10:55.854 { 00:10:55.854 "name": "BaseBdev2", 00:10:55.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.854 "is_configured": false, 00:10:55.854 "data_offset": 0, 00:10:55.854 "data_size": 0 00:10:55.854 }, 00:10:55.854 { 00:10:55.854 "name": "BaseBdev3", 00:10:55.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.854 "is_configured": false, 00:10:55.854 "data_offset": 0, 00:10:55.854 "data_size": 0 00:10:55.854 } 00:10:55.854 ] 00:10:55.854 }' 00:10:55.854 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:55.854 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.422 06:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:56.684 [2024-08-14 06:42:23.784170] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:56.684 [2024-08-14 06:42:23.784266] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:56.684 06:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:56.945 [2024-08-14 06:42:24.011857] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.945 [2024-08-14 06:42:24.013941] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:56.945 [2024-08-14 06:42:24.014067] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:56.945 [2024-08-14 06:42:24.014092] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:56.945 [2024-08-14 06:42:24.014102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:56.946 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:56.946 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:56.946 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:56.946 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:56.946 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:56.946 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:56.946 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:56.946 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:56.946 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:56.946 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:56.946 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:56.946 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:56.946 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.946 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.205 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:57.205 "name": "Existed_Raid", 00:10:57.205 "uuid": "6ed783fe-efa0-476d-b9b1-885c4baaf9de", 00:10:57.205 "strip_size_kb": 64, 00:10:57.205 "state": "configuring", 00:10:57.205 "raid_level": "concat", 00:10:57.205 "superblock": true, 00:10:57.205 "num_base_bdevs": 3, 00:10:57.205 "num_base_bdevs_discovered": 1, 00:10:57.205 "num_base_bdevs_operational": 3, 00:10:57.205 "base_bdevs_list": [ 00:10:57.205 { 00:10:57.205 "name": "BaseBdev1", 00:10:57.205 "uuid": "4b0729c7-83f7-4de0-9818-5cda0d6f4548", 00:10:57.205 "is_configured": true, 00:10:57.205 "data_offset": 2048, 00:10:57.205 "data_size": 63488 00:10:57.205 }, 00:10:57.205 { 00:10:57.205 "name": "BaseBdev2", 00:10:57.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.205 "is_configured": false, 00:10:57.205 "data_offset": 0, 00:10:57.205 "data_size": 0 00:10:57.205 }, 00:10:57.205 { 00:10:57.205 "name": "BaseBdev3", 00:10:57.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.205 "is_configured": false, 00:10:57.205 "data_offset": 0, 00:10:57.205 "data_size": 0 00:10:57.205 } 00:10:57.205 ] 00:10:57.205 }' 00:10:57.205 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:57.205 06:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.773 06:42:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:58.034 [2024-08-14 06:42:25.102978] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.034 BaseBdev2 00:10:58.034 06:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:58.035 06:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:10:58.035 06:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:58.035 06:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:58.035 06:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:58.035 06:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:58.035 06:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:58.319 06:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:58.633 [ 00:10:58.633 { 00:10:58.633 "name": "BaseBdev2", 00:10:58.633 "aliases": [ 00:10:58.633 "6001121f-863a-4ca8-a4b6-856159e7584a" 00:10:58.633 ], 00:10:58.633 "product_name": "Malloc disk", 00:10:58.633 "block_size": 512, 00:10:58.633 "num_blocks": 65536, 00:10:58.633 "uuid": "6001121f-863a-4ca8-a4b6-856159e7584a", 00:10:58.633 "assigned_rate_limits": { 00:10:58.633 "rw_ios_per_sec": 0, 00:10:58.633 "rw_mbytes_per_sec": 0, 00:10:58.633 "r_mbytes_per_sec": 0, 00:10:58.633 "w_mbytes_per_sec": 0 00:10:58.633 }, 00:10:58.633 "claimed": true, 00:10:58.633 "claim_type": "exclusive_write", 00:10:58.633 "zoned": false, 00:10:58.633 "supported_io_types": { 00:10:58.633 "read": true, 00:10:58.633 "write": true, 00:10:58.633 "unmap": true, 00:10:58.633 "flush": true, 00:10:58.633 "reset": true, 00:10:58.633 "nvme_admin": false, 00:10:58.633 "nvme_io": false, 00:10:58.633 "nvme_io_md": false, 00:10:58.633 "write_zeroes": true, 00:10:58.633 "zcopy": true, 00:10:58.633 "get_zone_info": false, 00:10:58.633 "zone_management": false, 00:10:58.633 "zone_append": false, 00:10:58.633 "compare": false, 00:10:58.633 "compare_and_write": false, 00:10:58.633 "abort": true, 00:10:58.633 "seek_hole": false, 00:10:58.633 "seek_data": false, 00:10:58.633 "copy": true, 00:10:58.633 "nvme_iov_md": false 00:10:58.633 }, 00:10:58.633 "memory_domains": [ 00:10:58.633 { 00:10:58.633 "dma_device_id": "system", 00:10:58.633 "dma_device_type": 1 00:10:58.633 }, 00:10:58.633 { 00:10:58.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.633 "dma_device_type": 2 00:10:58.633 } 00:10:58.633 ], 00:10:58.633 "driver_specific": {} 00:10:58.633 } 00:10:58.633 ] 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:58.633 06:42:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:58.633 "name": "Existed_Raid", 00:10:58.633 "uuid": "6ed783fe-efa0-476d-b9b1-885c4baaf9de", 00:10:58.633 "strip_size_kb": 64, 00:10:58.633 "state": "configuring", 00:10:58.633 "raid_level": "concat", 00:10:58.633 "superblock": true, 00:10:58.633 "num_base_bdevs": 3, 00:10:58.633 "num_base_bdevs_discovered": 2, 00:10:58.633 "num_base_bdevs_operational": 3, 00:10:58.633 "base_bdevs_list": [ 00:10:58.633 { 00:10:58.633 "name": "BaseBdev1", 00:10:58.633 "uuid": "4b0729c7-83f7-4de0-9818-5cda0d6f4548", 00:10:58.633 "is_configured": true, 00:10:58.633 "data_offset": 2048, 00:10:58.633 "data_size": 63488 00:10:58.633 }, 00:10:58.633 { 00:10:58.633 "name": "BaseBdev2", 00:10:58.633 "uuid": "6001121f-863a-4ca8-a4b6-856159e7584a", 00:10:58.633 "is_configured": true, 00:10:58.633 "data_offset": 2048, 00:10:58.633 "data_size": 63488 00:10:58.633 }, 00:10:58.633 { 00:10:58.633 "name": "BaseBdev3", 00:10:58.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.633 "is_configured": false, 00:10:58.633 "data_offset": 0, 00:10:58.633 "data_size": 0 00:10:58.633 } 00:10:58.633 ] 00:10:58.633 }' 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:58.633 06:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.205 06:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:59.465 [2024-08-14 06:42:26.559969] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.465 [2024-08-14 06:42:26.560323] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:59.465 [2024-08-14 06:42:26.560385] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:59.465 [2024-08-14 06:42:26.560758] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:59.465 [2024-08-14 06:42:26.560946] bdev_raid.c:1751:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000001900 00:10:59.465 [2024-08-14 06:42:26.560998] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:10:59.465 [2024-08-14 06:42:26.561214] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.465 BaseBdev3 00:10:59.465 06:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:59.465 06:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:10:59.465 06:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:59.465 06:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:59.465 06:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:59.465 06:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:59.465 06:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:59.725 06:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:59.985 [ 00:10:59.985 { 00:10:59.985 "name": "BaseBdev3", 00:10:59.985 "aliases": [ 00:10:59.985 "69445559-9ca0-4a12-bfb8-6b4ed03249e9" 00:10:59.985 ], 00:10:59.985 "product_name": "Malloc disk", 00:10:59.985 "block_size": 512, 00:10:59.985 "num_blocks": 65536, 00:10:59.985 "uuid": "69445559-9ca0-4a12-bfb8-6b4ed03249e9", 00:10:59.985 "assigned_rate_limits": { 00:10:59.985 "rw_ios_per_sec": 0, 00:10:59.985 "rw_mbytes_per_sec": 0, 00:10:59.985 "r_mbytes_per_sec": 0, 00:10:59.985 "w_mbytes_per_sec": 0 00:10:59.985 }, 00:10:59.985 "claimed": true, 00:10:59.985 "claim_type": "exclusive_write", 00:10:59.985 "zoned": false, 00:10:59.985 "supported_io_types": { 00:10:59.985 "read": true, 00:10:59.985 "write": true, 00:10:59.985 "unmap": true, 00:10:59.985 "flush": true, 00:10:59.985 "reset": true, 00:10:59.985 "nvme_admin": false, 00:10:59.985 "nvme_io": false, 00:10:59.985 "nvme_io_md": false, 00:10:59.985 "write_zeroes": true, 00:10:59.985 "zcopy": true, 00:10:59.985 "get_zone_info": false, 00:10:59.985 "zone_management": false, 00:10:59.985 "zone_append": false, 00:10:59.985 "compare": false, 00:10:59.985 "compare_and_write": false, 00:10:59.985 "abort": true, 00:10:59.985 "seek_hole": false, 00:10:59.985 "seek_data": false, 00:10:59.985 "copy": true, 00:10:59.985 "nvme_iov_md": false 00:10:59.985 }, 00:10:59.985 "memory_domains": [ 00:10:59.985 { 00:10:59.985 "dma_device_id": "system", 00:10:59.985 "dma_device_type": 1 00:10:59.985 }, 00:10:59.985 { 00:10:59.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.985 "dma_device_type": 2 00:10:59.985 } 00:10:59.985 ], 00:10:59.985 "driver_specific": {} 00:10:59.985 } 00:10:59.985 ] 00:10:59.985 06:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:59.985 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:59.985 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:59.985 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 
00:10:59.985 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:59.985 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:59.985 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:59.985 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:59.985 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:59.985 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:59.985 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:59.985 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:59.985 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:59.985 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:59.985 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.245 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:00.245 "name": "Existed_Raid", 00:11:00.245 "uuid": "6ed783fe-efa0-476d-b9b1-885c4baaf9de", 00:11:00.245 "strip_size_kb": 64, 00:11:00.245 "state": "online", 00:11:00.245 "raid_level": "concat", 00:11:00.245 "superblock": true, 00:11:00.245 "num_base_bdevs": 3, 00:11:00.245 "num_base_bdevs_discovered": 3, 00:11:00.245 "num_base_bdevs_operational": 3, 00:11:00.245 "base_bdevs_list": [ 00:11:00.245 { 00:11:00.245 "name": "BaseBdev1", 00:11:00.245 "uuid": "4b0729c7-83f7-4de0-9818-5cda0d6f4548", 00:11:00.245 "is_configured": true, 00:11:00.245 "data_offset": 2048, 00:11:00.245 "data_size": 63488 00:11:00.245 }, 00:11:00.245 { 00:11:00.245 "name": "BaseBdev2", 00:11:00.245 "uuid": "6001121f-863a-4ca8-a4b6-856159e7584a", 00:11:00.245 "is_configured": true, 00:11:00.245 "data_offset": 2048, 00:11:00.245 "data_size": 63488 00:11:00.245 }, 00:11:00.245 { 00:11:00.245 "name": "BaseBdev3", 00:11:00.245 "uuid": "69445559-9ca0-4a12-bfb8-6b4ed03249e9", 00:11:00.245 "is_configured": true, 00:11:00.245 "data_offset": 2048, 00:11:00.245 "data_size": 63488 00:11:00.245 } 00:11:00.245 ] 00:11:00.245 }' 00:11:00.245 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:00.245 06:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.815 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:00.815 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:00.815 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:00.815 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:00.815 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:00.815 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:00.815 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 
-- # jq '.[]' 00:11:00.815 06:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:01.075 [2024-08-14 06:42:28.077872] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.075 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:01.075 "name": "Existed_Raid", 00:11:01.075 "aliases": [ 00:11:01.075 "6ed783fe-efa0-476d-b9b1-885c4baaf9de" 00:11:01.075 ], 00:11:01.075 "product_name": "Raid Volume", 00:11:01.075 "block_size": 512, 00:11:01.075 "num_blocks": 190464, 00:11:01.075 "uuid": "6ed783fe-efa0-476d-b9b1-885c4baaf9de", 00:11:01.075 "assigned_rate_limits": { 00:11:01.075 "rw_ios_per_sec": 0, 00:11:01.075 "rw_mbytes_per_sec": 0, 00:11:01.075 "r_mbytes_per_sec": 0, 00:11:01.075 "w_mbytes_per_sec": 0 00:11:01.075 }, 00:11:01.075 "claimed": false, 00:11:01.075 "zoned": false, 00:11:01.075 "supported_io_types": { 00:11:01.075 "read": true, 00:11:01.075 "write": true, 00:11:01.075 "unmap": true, 00:11:01.075 "flush": true, 00:11:01.075 "reset": true, 00:11:01.075 "nvme_admin": false, 00:11:01.075 "nvme_io": false, 00:11:01.075 "nvme_io_md": false, 00:11:01.075 "write_zeroes": true, 00:11:01.075 "zcopy": false, 00:11:01.075 "get_zone_info": false, 00:11:01.075 "zone_management": false, 00:11:01.075 "zone_append": false, 00:11:01.075 "compare": false, 00:11:01.075 "compare_and_write": false, 00:11:01.075 "abort": false, 00:11:01.075 "seek_hole": false, 00:11:01.075 "seek_data": false, 00:11:01.075 "copy": false, 00:11:01.075 "nvme_iov_md": false 00:11:01.075 }, 00:11:01.075 "memory_domains": [ 00:11:01.075 { 00:11:01.075 "dma_device_id": "system", 00:11:01.075 "dma_device_type": 1 00:11:01.075 }, 00:11:01.075 { 00:11:01.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.075 "dma_device_type": 2 00:11:01.075 }, 00:11:01.075 { 00:11:01.075 "dma_device_id": "system", 00:11:01.075 "dma_device_type": 1 00:11:01.075 }, 00:11:01.075 { 00:11:01.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.075 "dma_device_type": 2 00:11:01.075 }, 00:11:01.075 { 00:11:01.075 "dma_device_id": "system", 00:11:01.075 "dma_device_type": 1 00:11:01.075 }, 00:11:01.075 { 00:11:01.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.075 "dma_device_type": 2 00:11:01.075 } 00:11:01.075 ], 00:11:01.075 "driver_specific": { 00:11:01.075 "raid": { 00:11:01.075 "uuid": "6ed783fe-efa0-476d-b9b1-885c4baaf9de", 00:11:01.075 "strip_size_kb": 64, 00:11:01.075 "state": "online", 00:11:01.075 "raid_level": "concat", 00:11:01.075 "superblock": true, 00:11:01.075 "num_base_bdevs": 3, 00:11:01.075 "num_base_bdevs_discovered": 3, 00:11:01.075 "num_base_bdevs_operational": 3, 00:11:01.075 "base_bdevs_list": [ 00:11:01.076 { 00:11:01.076 "name": "BaseBdev1", 00:11:01.076 "uuid": "4b0729c7-83f7-4de0-9818-5cda0d6f4548", 00:11:01.076 "is_configured": true, 00:11:01.076 "data_offset": 2048, 00:11:01.076 "data_size": 63488 00:11:01.076 }, 00:11:01.076 { 00:11:01.076 "name": "BaseBdev2", 00:11:01.076 "uuid": "6001121f-863a-4ca8-a4b6-856159e7584a", 00:11:01.076 "is_configured": true, 00:11:01.076 "data_offset": 2048, 00:11:01.076 "data_size": 63488 00:11:01.076 }, 00:11:01.076 { 00:11:01.076 "name": "BaseBdev3", 00:11:01.076 "uuid": "69445559-9ca0-4a12-bfb8-6b4ed03249e9", 00:11:01.076 "is_configured": true, 00:11:01.076 "data_offset": 2048, 00:11:01.076 "data_size": 63488 00:11:01.076 } 00:11:01.076 ] 00:11:01.076 } 00:11:01.076 } 
00:11:01.076 }' 00:11:01.076 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.076 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:01.076 BaseBdev2 00:11:01.076 BaseBdev3' 00:11:01.076 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:01.076 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:01.076 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:01.336 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:01.336 "name": "BaseBdev1", 00:11:01.336 "aliases": [ 00:11:01.336 "4b0729c7-83f7-4de0-9818-5cda0d6f4548" 00:11:01.336 ], 00:11:01.336 "product_name": "Malloc disk", 00:11:01.336 "block_size": 512, 00:11:01.336 "num_blocks": 65536, 00:11:01.336 "uuid": "4b0729c7-83f7-4de0-9818-5cda0d6f4548", 00:11:01.336 "assigned_rate_limits": { 00:11:01.336 "rw_ios_per_sec": 0, 00:11:01.336 "rw_mbytes_per_sec": 0, 00:11:01.336 "r_mbytes_per_sec": 0, 00:11:01.336 "w_mbytes_per_sec": 0 00:11:01.336 }, 00:11:01.336 "claimed": true, 00:11:01.336 "claim_type": "exclusive_write", 00:11:01.336 "zoned": false, 00:11:01.336 "supported_io_types": { 00:11:01.336 "read": true, 00:11:01.336 "write": true, 00:11:01.336 "unmap": true, 00:11:01.336 "flush": true, 00:11:01.336 "reset": true, 00:11:01.336 "nvme_admin": false, 00:11:01.336 "nvme_io": false, 00:11:01.336 "nvme_io_md": false, 00:11:01.336 "write_zeroes": true, 00:11:01.336 "zcopy": true, 00:11:01.336 "get_zone_info": false, 00:11:01.336 "zone_management": false, 00:11:01.336 "zone_append": false, 00:11:01.336 "compare": false, 00:11:01.336 "compare_and_write": false, 00:11:01.336 "abort": true, 00:11:01.336 "seek_hole": false, 00:11:01.336 "seek_data": false, 00:11:01.336 "copy": true, 00:11:01.336 "nvme_iov_md": false 00:11:01.336 }, 00:11:01.336 "memory_domains": [ 00:11:01.336 { 00:11:01.336 "dma_device_id": "system", 00:11:01.336 "dma_device_type": 1 00:11:01.336 }, 00:11:01.336 { 00:11:01.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.336 "dma_device_type": 2 00:11:01.336 } 00:11:01.336 ], 00:11:01.336 "driver_specific": {} 00:11:01.336 }' 00:11:01.336 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.336 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.336 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:01.336 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.336 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.336 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:01.336 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.596 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.596 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:01.596 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:11:01.596 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.596 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:01.596 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:01.597 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:01.597 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:01.856 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:01.856 "name": "BaseBdev2", 00:11:01.856 "aliases": [ 00:11:01.856 "6001121f-863a-4ca8-a4b6-856159e7584a" 00:11:01.856 ], 00:11:01.856 "product_name": "Malloc disk", 00:11:01.856 "block_size": 512, 00:11:01.856 "num_blocks": 65536, 00:11:01.856 "uuid": "6001121f-863a-4ca8-a4b6-856159e7584a", 00:11:01.856 "assigned_rate_limits": { 00:11:01.856 "rw_ios_per_sec": 0, 00:11:01.856 "rw_mbytes_per_sec": 0, 00:11:01.856 "r_mbytes_per_sec": 0, 00:11:01.856 "w_mbytes_per_sec": 0 00:11:01.856 }, 00:11:01.856 "claimed": true, 00:11:01.856 "claim_type": "exclusive_write", 00:11:01.856 "zoned": false, 00:11:01.856 "supported_io_types": { 00:11:01.856 "read": true, 00:11:01.856 "write": true, 00:11:01.856 "unmap": true, 00:11:01.856 "flush": true, 00:11:01.856 "reset": true, 00:11:01.856 "nvme_admin": false, 00:11:01.856 "nvme_io": false, 00:11:01.856 "nvme_io_md": false, 00:11:01.856 "write_zeroes": true, 00:11:01.856 "zcopy": true, 00:11:01.856 "get_zone_info": false, 00:11:01.856 "zone_management": false, 00:11:01.856 "zone_append": false, 00:11:01.856 "compare": false, 00:11:01.856 "compare_and_write": false, 00:11:01.856 "abort": true, 00:11:01.856 "seek_hole": false, 00:11:01.856 "seek_data": false, 00:11:01.856 "copy": true, 00:11:01.856 "nvme_iov_md": false 00:11:01.856 }, 00:11:01.856 "memory_domains": [ 00:11:01.856 { 00:11:01.856 "dma_device_id": "system", 00:11:01.856 "dma_device_type": 1 00:11:01.856 }, 00:11:01.856 { 00:11:01.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.856 "dma_device_type": 2 00:11:01.856 } 00:11:01.856 ], 00:11:01.856 "driver_specific": {} 00:11:01.856 }' 00:11:01.856 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.856 06:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.856 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:01.856 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.856 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:02.115 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:02.115 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:02.115 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:02.115 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:02.115 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:02.115 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:02.115 06:42:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:02.115 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:02.115 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:02.115 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:02.375 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:02.375 "name": "BaseBdev3", 00:11:02.375 "aliases": [ 00:11:02.375 "69445559-9ca0-4a12-bfb8-6b4ed03249e9" 00:11:02.375 ], 00:11:02.375 "product_name": "Malloc disk", 00:11:02.375 "block_size": 512, 00:11:02.375 "num_blocks": 65536, 00:11:02.375 "uuid": "69445559-9ca0-4a12-bfb8-6b4ed03249e9", 00:11:02.375 "assigned_rate_limits": { 00:11:02.375 "rw_ios_per_sec": 0, 00:11:02.375 "rw_mbytes_per_sec": 0, 00:11:02.375 "r_mbytes_per_sec": 0, 00:11:02.375 "w_mbytes_per_sec": 0 00:11:02.375 }, 00:11:02.375 "claimed": true, 00:11:02.375 "claim_type": "exclusive_write", 00:11:02.375 "zoned": false, 00:11:02.375 "supported_io_types": { 00:11:02.375 "read": true, 00:11:02.375 "write": true, 00:11:02.375 "unmap": true, 00:11:02.375 "flush": true, 00:11:02.375 "reset": true, 00:11:02.375 "nvme_admin": false, 00:11:02.375 "nvme_io": false, 00:11:02.375 "nvme_io_md": false, 00:11:02.375 "write_zeroes": true, 00:11:02.375 "zcopy": true, 00:11:02.375 "get_zone_info": false, 00:11:02.375 "zone_management": false, 00:11:02.375 "zone_append": false, 00:11:02.375 "compare": false, 00:11:02.375 "compare_and_write": false, 00:11:02.375 "abort": true, 00:11:02.375 "seek_hole": false, 00:11:02.375 "seek_data": false, 00:11:02.375 "copy": true, 00:11:02.375 "nvme_iov_md": false 00:11:02.375 }, 00:11:02.375 "memory_domains": [ 00:11:02.375 { 00:11:02.375 "dma_device_id": "system", 00:11:02.375 "dma_device_type": 1 00:11:02.375 }, 00:11:02.375 { 00:11:02.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.375 "dma_device_type": 2 00:11:02.375 } 00:11:02.375 ], 00:11:02.375 "driver_specific": {} 00:11:02.375 }' 00:11:02.375 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:02.375 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:02.375 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:02.375 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:02.635 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:02.635 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:02.635 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:02.635 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:02.635 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:02.635 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:02.635 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:02.635 06:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:02.635 06:42:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:02.894 [2024-08-14 06:42:30.090332] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:02.894 [2024-08-14 06:42:30.090463] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.894 [2024-08-14 06:42:30.090530] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.894 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.154 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:03.154 "name": "Existed_Raid", 00:11:03.154 "uuid": "6ed783fe-efa0-476d-b9b1-885c4baaf9de", 00:11:03.154 "strip_size_kb": 64, 00:11:03.154 "state": "offline", 00:11:03.154 "raid_level": "concat", 00:11:03.154 "superblock": true, 00:11:03.154 "num_base_bdevs": 3, 00:11:03.154 "num_base_bdevs_discovered": 2, 00:11:03.154 "num_base_bdevs_operational": 2, 00:11:03.154 "base_bdevs_list": [ 00:11:03.154 { 00:11:03.154 "name": null, 00:11:03.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.154 "is_configured": false, 00:11:03.154 "data_offset": 2048, 00:11:03.154 "data_size": 63488 00:11:03.154 }, 00:11:03.154 { 00:11:03.154 "name": "BaseBdev2", 00:11:03.154 "uuid": "6001121f-863a-4ca8-a4b6-856159e7584a", 00:11:03.154 "is_configured": true, 00:11:03.154 "data_offset": 2048, 00:11:03.154 "data_size": 63488 00:11:03.154 }, 00:11:03.154 { 00:11:03.154 "name": "BaseBdev3", 00:11:03.154 "uuid": 
"69445559-9ca0-4a12-bfb8-6b4ed03249e9", 00:11:03.154 "is_configured": true, 00:11:03.154 "data_offset": 2048, 00:11:03.154 "data_size": 63488 00:11:03.154 } 00:11:03.154 ] 00:11:03.154 }' 00:11:03.154 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:03.154 06:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.735 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:03.735 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:03.736 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.736 06:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:03.995 06:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:03.995 06:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:03.995 06:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:04.255 [2024-08-14 06:42:31.340329] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:04.255 06:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:04.255 06:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:04.255 06:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:04.255 06:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:04.515 06:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:04.515 06:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:04.515 06:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:04.775 [2024-08-14 06:42:31.819340] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:04.775 [2024-08-14 06:42:31.819405] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:11:04.775 06:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:04.775 06:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:04.775 06:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:04.775 06:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:05.034 06:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:05.034 06:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:05.034 06:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:05.034 06:42:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:05.034 06:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:05.034 06:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:05.293 BaseBdev2 00:11:05.293 06:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:05.293 06:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:11:05.293 06:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:05.293 06:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:05.293 06:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:05.293 06:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:05.293 06:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:05.293 06:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:05.552 [ 00:11:05.552 { 00:11:05.552 "name": "BaseBdev2", 00:11:05.552 "aliases": [ 00:11:05.552 "20604fcb-f3f2-4734-a427-806b6f480bef" 00:11:05.552 ], 00:11:05.552 "product_name": "Malloc disk", 00:11:05.552 "block_size": 512, 00:11:05.552 "num_blocks": 65536, 00:11:05.552 "uuid": "20604fcb-f3f2-4734-a427-806b6f480bef", 00:11:05.552 "assigned_rate_limits": { 00:11:05.552 "rw_ios_per_sec": 0, 00:11:05.552 "rw_mbytes_per_sec": 0, 00:11:05.552 "r_mbytes_per_sec": 0, 00:11:05.552 "w_mbytes_per_sec": 0 00:11:05.552 }, 00:11:05.552 "claimed": false, 00:11:05.552 "zoned": false, 00:11:05.552 "supported_io_types": { 00:11:05.552 "read": true, 00:11:05.552 "write": true, 00:11:05.552 "unmap": true, 00:11:05.552 "flush": true, 00:11:05.552 "reset": true, 00:11:05.552 "nvme_admin": false, 00:11:05.552 "nvme_io": false, 00:11:05.552 "nvme_io_md": false, 00:11:05.552 "write_zeroes": true, 00:11:05.552 "zcopy": true, 00:11:05.552 "get_zone_info": false, 00:11:05.552 "zone_management": false, 00:11:05.552 "zone_append": false, 00:11:05.552 "compare": false, 00:11:05.552 "compare_and_write": false, 00:11:05.552 "abort": true, 00:11:05.552 "seek_hole": false, 00:11:05.552 "seek_data": false, 00:11:05.552 "copy": true, 00:11:05.552 "nvme_iov_md": false 00:11:05.552 }, 00:11:05.552 "memory_domains": [ 00:11:05.552 { 00:11:05.552 "dma_device_id": "system", 00:11:05.552 "dma_device_type": 1 00:11:05.552 }, 00:11:05.552 { 00:11:05.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.552 "dma_device_type": 2 00:11:05.552 } 00:11:05.552 ], 00:11:05.552 "driver_specific": {} 00:11:05.552 } 00:11:05.552 ] 00:11:05.552 06:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:05.552 06:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:05.552 06:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:05.552 06:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:05.811 BaseBdev3 00:11:05.811 06:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:05.811 06:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:11:05.811 06:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:05.811 06:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:05.811 06:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:05.811 06:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:05.811 06:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:06.069 06:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.327 [ 00:11:06.327 { 00:11:06.327 "name": "BaseBdev3", 00:11:06.327 "aliases": [ 00:11:06.327 "e483dc43-0d90-4d5a-876c-0d0d6ae4978b" 00:11:06.327 ], 00:11:06.327 "product_name": "Malloc disk", 00:11:06.327 "block_size": 512, 00:11:06.327 "num_blocks": 65536, 00:11:06.327 "uuid": "e483dc43-0d90-4d5a-876c-0d0d6ae4978b", 00:11:06.327 "assigned_rate_limits": { 00:11:06.327 "rw_ios_per_sec": 0, 00:11:06.327 "rw_mbytes_per_sec": 0, 00:11:06.327 "r_mbytes_per_sec": 0, 00:11:06.327 "w_mbytes_per_sec": 0 00:11:06.327 }, 00:11:06.327 "claimed": false, 00:11:06.327 "zoned": false, 00:11:06.327 "supported_io_types": { 00:11:06.327 "read": true, 00:11:06.327 "write": true, 00:11:06.327 "unmap": true, 00:11:06.327 "flush": true, 00:11:06.327 "reset": true, 00:11:06.327 "nvme_admin": false, 00:11:06.327 "nvme_io": false, 00:11:06.327 "nvme_io_md": false, 00:11:06.327 "write_zeroes": true, 00:11:06.327 "zcopy": true, 00:11:06.327 "get_zone_info": false, 00:11:06.327 "zone_management": false, 00:11:06.327 "zone_append": false, 00:11:06.327 "compare": false, 00:11:06.327 "compare_and_write": false, 00:11:06.327 "abort": true, 00:11:06.327 "seek_hole": false, 00:11:06.327 "seek_data": false, 00:11:06.327 "copy": true, 00:11:06.327 "nvme_iov_md": false 00:11:06.327 }, 00:11:06.327 "memory_domains": [ 00:11:06.327 { 00:11:06.327 "dma_device_id": "system", 00:11:06.327 "dma_device_type": 1 00:11:06.327 }, 00:11:06.327 { 00:11:06.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.327 "dma_device_type": 2 00:11:06.327 } 00:11:06.327 ], 00:11:06.327 "driver_specific": {} 00:11:06.327 } 00:11:06.327 ] 00:11:06.327 06:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:06.327 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:06.327 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:06.327 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:06.586 [2024-08-14 06:42:33.594178] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:06.586 [2024-08-14 06:42:33.594260] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:06.586 [2024-08-14 06:42:33.594286] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:06.586 [2024-08-14 06:42:33.596309] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.586 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:06.586 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:06.586 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:06.586 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:06.586 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:06.586 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:06.586 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:06.586 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:06.586 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:06.586 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:06.586 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:06.586 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.844 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:06.844 "name": "Existed_Raid", 00:11:06.844 "uuid": "0e4f95e5-44e8-40b6-9a02-9b238771866e", 00:11:06.844 "strip_size_kb": 64, 00:11:06.844 "state": "configuring", 00:11:06.844 "raid_level": "concat", 00:11:06.844 "superblock": true, 00:11:06.844 "num_base_bdevs": 3, 00:11:06.844 "num_base_bdevs_discovered": 2, 00:11:06.844 "num_base_bdevs_operational": 3, 00:11:06.844 "base_bdevs_list": [ 00:11:06.844 { 00:11:06.844 "name": "BaseBdev1", 00:11:06.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.844 "is_configured": false, 00:11:06.844 "data_offset": 0, 00:11:06.844 "data_size": 0 00:11:06.844 }, 00:11:06.844 { 00:11:06.844 "name": "BaseBdev2", 00:11:06.844 "uuid": "20604fcb-f3f2-4734-a427-806b6f480bef", 00:11:06.844 "is_configured": true, 00:11:06.844 "data_offset": 2048, 00:11:06.844 "data_size": 63488 00:11:06.844 }, 00:11:06.844 { 00:11:06.844 "name": "BaseBdev3", 00:11:06.844 "uuid": "e483dc43-0d90-4d5a-876c-0d0d6ae4978b", 00:11:06.844 "is_configured": true, 00:11:06.844 "data_offset": 2048, 00:11:06.844 "data_size": 63488 00:11:06.844 } 00:11:06.844 ] 00:11:06.844 }' 00:11:06.844 06:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:06.844 06:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.412 06:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:07.412 [2024-08-14 06:42:34.616478] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev2 00:11:07.412 06:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:07.412 06:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:07.412 06:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:07.412 06:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:07.412 06:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:07.412 06:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:07.412 06:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:07.412 06:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:07.412 06:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:07.412 06:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:07.412 06:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.412 06:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.670 06:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:07.670 "name": "Existed_Raid", 00:11:07.670 "uuid": "0e4f95e5-44e8-40b6-9a02-9b238771866e", 00:11:07.670 "strip_size_kb": 64, 00:11:07.670 "state": "configuring", 00:11:07.670 "raid_level": "concat", 00:11:07.670 "superblock": true, 00:11:07.670 "num_base_bdevs": 3, 00:11:07.670 "num_base_bdevs_discovered": 1, 00:11:07.670 "num_base_bdevs_operational": 3, 00:11:07.670 "base_bdevs_list": [ 00:11:07.670 { 00:11:07.670 "name": "BaseBdev1", 00:11:07.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.670 "is_configured": false, 00:11:07.670 "data_offset": 0, 00:11:07.670 "data_size": 0 00:11:07.670 }, 00:11:07.670 { 00:11:07.670 "name": null, 00:11:07.670 "uuid": "20604fcb-f3f2-4734-a427-806b6f480bef", 00:11:07.670 "is_configured": false, 00:11:07.670 "data_offset": 2048, 00:11:07.670 "data_size": 63488 00:11:07.670 }, 00:11:07.670 { 00:11:07.670 "name": "BaseBdev3", 00:11:07.670 "uuid": "e483dc43-0d90-4d5a-876c-0d0d6ae4978b", 00:11:07.670 "is_configured": true, 00:11:07.670 "data_offset": 2048, 00:11:07.670 "data_size": 63488 00:11:07.670 } 00:11:07.670 ] 00:11:07.670 }' 00:11:07.670 06:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:07.671 06:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.608 06:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.608 06:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:08.608 06:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:08.608 06:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev1 00:11:08.867 [2024-08-14 06:42:35.941373] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.867 BaseBdev1 00:11:08.867 06:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:08.867 06:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:11:08.867 06:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:08.867 06:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:08.867 06:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:08.867 06:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:08.867 06:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:09.126 06:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:09.126 [ 00:11:09.126 { 00:11:09.126 "name": "BaseBdev1", 00:11:09.126 "aliases": [ 00:11:09.126 "6145d27a-4191-4405-ae3b-54ba7688d527" 00:11:09.126 ], 00:11:09.126 "product_name": "Malloc disk", 00:11:09.126 "block_size": 512, 00:11:09.126 "num_blocks": 65536, 00:11:09.126 "uuid": "6145d27a-4191-4405-ae3b-54ba7688d527", 00:11:09.126 "assigned_rate_limits": { 00:11:09.126 "rw_ios_per_sec": 0, 00:11:09.126 "rw_mbytes_per_sec": 0, 00:11:09.126 "r_mbytes_per_sec": 0, 00:11:09.126 "w_mbytes_per_sec": 0 00:11:09.126 }, 00:11:09.126 "claimed": true, 00:11:09.126 "claim_type": "exclusive_write", 00:11:09.126 "zoned": false, 00:11:09.126 "supported_io_types": { 00:11:09.126 "read": true, 00:11:09.126 "write": true, 00:11:09.126 "unmap": true, 00:11:09.126 "flush": true, 00:11:09.126 "reset": true, 00:11:09.126 "nvme_admin": false, 00:11:09.126 "nvme_io": false, 00:11:09.126 "nvme_io_md": false, 00:11:09.126 "write_zeroes": true, 00:11:09.126 "zcopy": true, 00:11:09.126 "get_zone_info": false, 00:11:09.126 "zone_management": false, 00:11:09.126 "zone_append": false, 00:11:09.126 "compare": false, 00:11:09.126 "compare_and_write": false, 00:11:09.126 "abort": true, 00:11:09.126 "seek_hole": false, 00:11:09.126 "seek_data": false, 00:11:09.126 "copy": true, 00:11:09.126 "nvme_iov_md": false 00:11:09.126 }, 00:11:09.126 "memory_domains": [ 00:11:09.126 { 00:11:09.126 "dma_device_id": "system", 00:11:09.126 "dma_device_type": 1 00:11:09.126 }, 00:11:09.126 { 00:11:09.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.126 "dma_device_type": 2 00:11:09.126 } 00:11:09.126 ], 00:11:09.126 "driver_specific": {} 00:11:09.126 } 00:11:09.126 ] 00:11:09.385 06:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:09.385 06:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:09.385 06:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:09.385 06:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:09.385 06:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:09.385 06:42:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:09.385 06:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:09.385 06:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:09.385 06:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:09.385 06:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:09.385 06:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:09.385 06:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.385 06:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.385 06:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:09.385 "name": "Existed_Raid", 00:11:09.385 "uuid": "0e4f95e5-44e8-40b6-9a02-9b238771866e", 00:11:09.385 "strip_size_kb": 64, 00:11:09.385 "state": "configuring", 00:11:09.385 "raid_level": "concat", 00:11:09.385 "superblock": true, 00:11:09.385 "num_base_bdevs": 3, 00:11:09.385 "num_base_bdevs_discovered": 2, 00:11:09.385 "num_base_bdevs_operational": 3, 00:11:09.385 "base_bdevs_list": [ 00:11:09.385 { 00:11:09.385 "name": "BaseBdev1", 00:11:09.385 "uuid": "6145d27a-4191-4405-ae3b-54ba7688d527", 00:11:09.385 "is_configured": true, 00:11:09.385 "data_offset": 2048, 00:11:09.385 "data_size": 63488 00:11:09.385 }, 00:11:09.385 { 00:11:09.385 "name": null, 00:11:09.385 "uuid": "20604fcb-f3f2-4734-a427-806b6f480bef", 00:11:09.385 "is_configured": false, 00:11:09.385 "data_offset": 2048, 00:11:09.385 "data_size": 63488 00:11:09.385 }, 00:11:09.385 { 00:11:09.385 "name": "BaseBdev3", 00:11:09.385 "uuid": "e483dc43-0d90-4d5a-876c-0d0d6ae4978b", 00:11:09.385 "is_configured": true, 00:11:09.385 "data_offset": 2048, 00:11:09.385 "data_size": 63488 00:11:09.385 } 00:11:09.385 ] 00:11:09.385 }' 00:11:09.385 06:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:09.385 06:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.952 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.952 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:10.211 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:10.211 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:10.470 [2024-08-14 06:42:37.606678] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:10.470 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:10.470 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:10.471 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:10.471 
06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:10.471 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:10.471 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:10.471 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:10.471 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:10.471 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:10.471 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:10.471 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:10.471 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.730 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:10.730 "name": "Existed_Raid", 00:11:10.730 "uuid": "0e4f95e5-44e8-40b6-9a02-9b238771866e", 00:11:10.730 "strip_size_kb": 64, 00:11:10.730 "state": "configuring", 00:11:10.730 "raid_level": "concat", 00:11:10.730 "superblock": true, 00:11:10.730 "num_base_bdevs": 3, 00:11:10.730 "num_base_bdevs_discovered": 1, 00:11:10.730 "num_base_bdevs_operational": 3, 00:11:10.730 "base_bdevs_list": [ 00:11:10.730 { 00:11:10.730 "name": "BaseBdev1", 00:11:10.730 "uuid": "6145d27a-4191-4405-ae3b-54ba7688d527", 00:11:10.730 "is_configured": true, 00:11:10.730 "data_offset": 2048, 00:11:10.730 "data_size": 63488 00:11:10.730 }, 00:11:10.730 { 00:11:10.730 "name": null, 00:11:10.730 "uuid": "20604fcb-f3f2-4734-a427-806b6f480bef", 00:11:10.730 "is_configured": false, 00:11:10.730 "data_offset": 2048, 00:11:10.730 "data_size": 63488 00:11:10.730 }, 00:11:10.730 { 00:11:10.730 "name": null, 00:11:10.730 "uuid": "e483dc43-0d90-4d5a-876c-0d0d6ae4978b", 00:11:10.730 "is_configured": false, 00:11:10.730 "data_offset": 2048, 00:11:10.730 "data_size": 63488 00:11:10.730 } 00:11:10.730 ] 00:11:10.730 }' 00:11:10.730 06:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:10.730 06:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.299 06:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.299 06:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.559 06:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:11.559 06:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:11.818 [2024-08-14 06:42:38.920706] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:11.818 06:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:11.818 06:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:11:11.818 06:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:11.818 06:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:11.818 06:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:11.818 06:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:11.819 06:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:11.819 06:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:11.819 06:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:11.819 06:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:11.819 06:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.819 06:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.077 06:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:12.077 "name": "Existed_Raid", 00:11:12.077 "uuid": "0e4f95e5-44e8-40b6-9a02-9b238771866e", 00:11:12.077 "strip_size_kb": 64, 00:11:12.077 "state": "configuring", 00:11:12.077 "raid_level": "concat", 00:11:12.077 "superblock": true, 00:11:12.077 "num_base_bdevs": 3, 00:11:12.077 "num_base_bdevs_discovered": 2, 00:11:12.077 "num_base_bdevs_operational": 3, 00:11:12.077 "base_bdevs_list": [ 00:11:12.077 { 00:11:12.077 "name": "BaseBdev1", 00:11:12.077 "uuid": "6145d27a-4191-4405-ae3b-54ba7688d527", 00:11:12.077 "is_configured": true, 00:11:12.077 "data_offset": 2048, 00:11:12.077 "data_size": 63488 00:11:12.077 }, 00:11:12.077 { 00:11:12.077 "name": null, 00:11:12.077 "uuid": "20604fcb-f3f2-4734-a427-806b6f480bef", 00:11:12.077 "is_configured": false, 00:11:12.077 "data_offset": 2048, 00:11:12.077 "data_size": 63488 00:11:12.077 }, 00:11:12.077 { 00:11:12.078 "name": "BaseBdev3", 00:11:12.078 "uuid": "e483dc43-0d90-4d5a-876c-0d0d6ae4978b", 00:11:12.078 "is_configured": true, 00:11:12.078 "data_offset": 2048, 00:11:12.078 "data_size": 63488 00:11:12.078 } 00:11:12.078 ] 00:11:12.078 }' 00:11:12.078 06:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:12.078 06:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.646 06:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:12.646 06:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:12.906 06:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:12.906 06:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:13.166 [2024-08-14 06:42:40.179002] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:13.166 06:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring 
concat 64 3 00:11:13.166 06:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:13.166 06:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:13.166 06:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:13.166 06:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:13.166 06:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:13.166 06:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:13.166 06:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:13.166 06:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:13.166 06:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:13.166 06:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:13.166 06:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.488 06:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:13.488 "name": "Existed_Raid", 00:11:13.488 "uuid": "0e4f95e5-44e8-40b6-9a02-9b238771866e", 00:11:13.488 "strip_size_kb": 64, 00:11:13.488 "state": "configuring", 00:11:13.488 "raid_level": "concat", 00:11:13.488 "superblock": true, 00:11:13.488 "num_base_bdevs": 3, 00:11:13.488 "num_base_bdevs_discovered": 1, 00:11:13.488 "num_base_bdevs_operational": 3, 00:11:13.488 "base_bdevs_list": [ 00:11:13.488 { 00:11:13.488 "name": null, 00:11:13.488 "uuid": "6145d27a-4191-4405-ae3b-54ba7688d527", 00:11:13.488 "is_configured": false, 00:11:13.488 "data_offset": 2048, 00:11:13.488 "data_size": 63488 00:11:13.488 }, 00:11:13.488 { 00:11:13.488 "name": null, 00:11:13.488 "uuid": "20604fcb-f3f2-4734-a427-806b6f480bef", 00:11:13.488 "is_configured": false, 00:11:13.488 "data_offset": 2048, 00:11:13.488 "data_size": 63488 00:11:13.488 }, 00:11:13.488 { 00:11:13.488 "name": "BaseBdev3", 00:11:13.488 "uuid": "e483dc43-0d90-4d5a-876c-0d0d6ae4978b", 00:11:13.488 "is_configured": true, 00:11:13.488 "data_offset": 2048, 00:11:13.488 "data_size": 63488 00:11:13.488 } 00:11:13.488 ] 00:11:13.488 }' 00:11:13.488 06:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:13.488 06:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.057 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:14.057 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:14.057 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:14.057 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:14.317 [2024-08-14 06:42:41.463577] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:11:14.317 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:14.317 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:14.317 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:14.317 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:14.317 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:14.317 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:14.317 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:14.317 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:14.317 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:14.317 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:14.317 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.317 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:14.577 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:14.577 "name": "Existed_Raid", 00:11:14.577 "uuid": "0e4f95e5-44e8-40b6-9a02-9b238771866e", 00:11:14.577 "strip_size_kb": 64, 00:11:14.577 "state": "configuring", 00:11:14.577 "raid_level": "concat", 00:11:14.577 "superblock": true, 00:11:14.577 "num_base_bdevs": 3, 00:11:14.577 "num_base_bdevs_discovered": 2, 00:11:14.577 "num_base_bdevs_operational": 3, 00:11:14.577 "base_bdevs_list": [ 00:11:14.577 { 00:11:14.577 "name": null, 00:11:14.577 "uuid": "6145d27a-4191-4405-ae3b-54ba7688d527", 00:11:14.577 "is_configured": false, 00:11:14.577 "data_offset": 2048, 00:11:14.577 "data_size": 63488 00:11:14.577 }, 00:11:14.577 { 00:11:14.577 "name": "BaseBdev2", 00:11:14.577 "uuid": "20604fcb-f3f2-4734-a427-806b6f480bef", 00:11:14.577 "is_configured": true, 00:11:14.577 "data_offset": 2048, 00:11:14.577 "data_size": 63488 00:11:14.577 }, 00:11:14.577 { 00:11:14.577 "name": "BaseBdev3", 00:11:14.577 "uuid": "e483dc43-0d90-4d5a-876c-0d0d6ae4978b", 00:11:14.577 "is_configured": true, 00:11:14.577 "data_offset": 2048, 00:11:14.577 "data_size": 63488 00:11:14.577 } 00:11:14.577 ] 00:11:14.577 }' 00:11:14.577 06:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:14.577 06:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.146 06:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.146 06:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:15.405 06:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:15.405 06:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:15.405 06:42:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.665 06:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6145d27a-4191-4405-ae3b-54ba7688d527 00:11:15.924 [2024-08-14 06:42:42.992326] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:15.924 [2024-08-14 06:42:42.992534] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:15.924 [2024-08-14 06:42:42.992548] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:15.924 [2024-08-14 06:42:42.992826] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:11:15.924 [2024-08-14 06:42:42.992980] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:15.924 [2024-08-14 06:42:42.993000] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:11:15.924 [2024-08-14 06:42:42.993122] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.924 NewBaseBdev 00:11:15.924 06:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:15.924 06:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:11:15.924 06:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:15.924 06:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:15.924 06:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:15.924 06:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:15.924 06:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:16.183 06:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:16.183 [ 00:11:16.183 { 00:11:16.183 "name": "NewBaseBdev", 00:11:16.183 "aliases": [ 00:11:16.183 "6145d27a-4191-4405-ae3b-54ba7688d527" 00:11:16.183 ], 00:11:16.183 "product_name": "Malloc disk", 00:11:16.183 "block_size": 512, 00:11:16.183 "num_blocks": 65536, 00:11:16.183 "uuid": "6145d27a-4191-4405-ae3b-54ba7688d527", 00:11:16.183 "assigned_rate_limits": { 00:11:16.183 "rw_ios_per_sec": 0, 00:11:16.183 "rw_mbytes_per_sec": 0, 00:11:16.183 "r_mbytes_per_sec": 0, 00:11:16.183 "w_mbytes_per_sec": 0 00:11:16.183 }, 00:11:16.183 "claimed": true, 00:11:16.183 "claim_type": "exclusive_write", 00:11:16.183 "zoned": false, 00:11:16.183 "supported_io_types": { 00:11:16.183 "read": true, 00:11:16.183 "write": true, 00:11:16.183 "unmap": true, 00:11:16.183 "flush": true, 00:11:16.183 "reset": true, 00:11:16.183 "nvme_admin": false, 00:11:16.183 "nvme_io": false, 00:11:16.183 "nvme_io_md": false, 00:11:16.183 "write_zeroes": true, 00:11:16.183 "zcopy": true, 00:11:16.183 "get_zone_info": false, 00:11:16.183 "zone_management": false, 00:11:16.183 "zone_append": false, 00:11:16.183 "compare": false, 00:11:16.183 
"compare_and_write": false, 00:11:16.183 "abort": true, 00:11:16.183 "seek_hole": false, 00:11:16.183 "seek_data": false, 00:11:16.183 "copy": true, 00:11:16.183 "nvme_iov_md": false 00:11:16.183 }, 00:11:16.183 "memory_domains": [ 00:11:16.183 { 00:11:16.183 "dma_device_id": "system", 00:11:16.183 "dma_device_type": 1 00:11:16.183 }, 00:11:16.183 { 00:11:16.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.183 "dma_device_type": 2 00:11:16.183 } 00:11:16.183 ], 00:11:16.183 "driver_specific": {} 00:11:16.183 } 00:11:16.183 ] 00:11:16.183 06:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:16.183 06:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:16.183 06:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:16.183 06:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:16.183 06:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:16.183 06:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:16.183 06:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:16.183 06:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:16.183 06:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:16.183 06:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:16.183 06:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:16.183 06:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.183 06:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.443 06:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:16.443 "name": "Existed_Raid", 00:11:16.443 "uuid": "0e4f95e5-44e8-40b6-9a02-9b238771866e", 00:11:16.443 "strip_size_kb": 64, 00:11:16.443 "state": "online", 00:11:16.443 "raid_level": "concat", 00:11:16.443 "superblock": true, 00:11:16.443 "num_base_bdevs": 3, 00:11:16.443 "num_base_bdevs_discovered": 3, 00:11:16.443 "num_base_bdevs_operational": 3, 00:11:16.443 "base_bdevs_list": [ 00:11:16.443 { 00:11:16.443 "name": "NewBaseBdev", 00:11:16.443 "uuid": "6145d27a-4191-4405-ae3b-54ba7688d527", 00:11:16.443 "is_configured": true, 00:11:16.443 "data_offset": 2048, 00:11:16.443 "data_size": 63488 00:11:16.443 }, 00:11:16.443 { 00:11:16.443 "name": "BaseBdev2", 00:11:16.443 "uuid": "20604fcb-f3f2-4734-a427-806b6f480bef", 00:11:16.443 "is_configured": true, 00:11:16.443 "data_offset": 2048, 00:11:16.443 "data_size": 63488 00:11:16.443 }, 00:11:16.443 { 00:11:16.443 "name": "BaseBdev3", 00:11:16.443 "uuid": "e483dc43-0d90-4d5a-876c-0d0d6ae4978b", 00:11:16.443 "is_configured": true, 00:11:16.443 "data_offset": 2048, 00:11:16.443 "data_size": 63488 00:11:16.443 } 00:11:16.443 ] 00:11:16.443 }' 00:11:16.443 06:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:16.443 06:42:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:17.381 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:17.381 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:17.381 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:17.381 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:17.381 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:17.381 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:17.381 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:17.381 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:17.381 [2024-08-14 06:42:44.490572] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.381 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:17.381 "name": "Existed_Raid", 00:11:17.381 "aliases": [ 00:11:17.381 "0e4f95e5-44e8-40b6-9a02-9b238771866e" 00:11:17.381 ], 00:11:17.381 "product_name": "Raid Volume", 00:11:17.381 "block_size": 512, 00:11:17.381 "num_blocks": 190464, 00:11:17.381 "uuid": "0e4f95e5-44e8-40b6-9a02-9b238771866e", 00:11:17.381 "assigned_rate_limits": { 00:11:17.381 "rw_ios_per_sec": 0, 00:11:17.381 "rw_mbytes_per_sec": 0, 00:11:17.381 "r_mbytes_per_sec": 0, 00:11:17.381 "w_mbytes_per_sec": 0 00:11:17.381 }, 00:11:17.381 "claimed": false, 00:11:17.381 "zoned": false, 00:11:17.381 "supported_io_types": { 00:11:17.381 "read": true, 00:11:17.381 "write": true, 00:11:17.381 "unmap": true, 00:11:17.381 "flush": true, 00:11:17.381 "reset": true, 00:11:17.381 "nvme_admin": false, 00:11:17.381 "nvme_io": false, 00:11:17.381 "nvme_io_md": false, 00:11:17.381 "write_zeroes": true, 00:11:17.381 "zcopy": false, 00:11:17.381 "get_zone_info": false, 00:11:17.381 "zone_management": false, 00:11:17.381 "zone_append": false, 00:11:17.381 "compare": false, 00:11:17.381 "compare_and_write": false, 00:11:17.381 "abort": false, 00:11:17.381 "seek_hole": false, 00:11:17.381 "seek_data": false, 00:11:17.381 "copy": false, 00:11:17.381 "nvme_iov_md": false 00:11:17.381 }, 00:11:17.381 "memory_domains": [ 00:11:17.381 { 00:11:17.381 "dma_device_id": "system", 00:11:17.381 "dma_device_type": 1 00:11:17.381 }, 00:11:17.381 { 00:11:17.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.381 "dma_device_type": 2 00:11:17.381 }, 00:11:17.381 { 00:11:17.381 "dma_device_id": "system", 00:11:17.381 "dma_device_type": 1 00:11:17.381 }, 00:11:17.381 { 00:11:17.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.381 "dma_device_type": 2 00:11:17.381 }, 00:11:17.381 { 00:11:17.381 "dma_device_id": "system", 00:11:17.381 "dma_device_type": 1 00:11:17.381 }, 00:11:17.381 { 00:11:17.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.381 "dma_device_type": 2 00:11:17.381 } 00:11:17.381 ], 00:11:17.381 "driver_specific": { 00:11:17.381 "raid": { 00:11:17.381 "uuid": "0e4f95e5-44e8-40b6-9a02-9b238771866e", 00:11:17.381 "strip_size_kb": 64, 00:11:17.381 "state": "online", 00:11:17.381 "raid_level": "concat", 00:11:17.382 "superblock": true, 00:11:17.382 "num_base_bdevs": 3, 00:11:17.382 
"num_base_bdevs_discovered": 3, 00:11:17.382 "num_base_bdevs_operational": 3, 00:11:17.382 "base_bdevs_list": [ 00:11:17.382 { 00:11:17.382 "name": "NewBaseBdev", 00:11:17.382 "uuid": "6145d27a-4191-4405-ae3b-54ba7688d527", 00:11:17.382 "is_configured": true, 00:11:17.382 "data_offset": 2048, 00:11:17.382 "data_size": 63488 00:11:17.382 }, 00:11:17.382 { 00:11:17.382 "name": "BaseBdev2", 00:11:17.382 "uuid": "20604fcb-f3f2-4734-a427-806b6f480bef", 00:11:17.382 "is_configured": true, 00:11:17.382 "data_offset": 2048, 00:11:17.382 "data_size": 63488 00:11:17.382 }, 00:11:17.382 { 00:11:17.382 "name": "BaseBdev3", 00:11:17.382 "uuid": "e483dc43-0d90-4d5a-876c-0d0d6ae4978b", 00:11:17.382 "is_configured": true, 00:11:17.382 "data_offset": 2048, 00:11:17.382 "data_size": 63488 00:11:17.382 } 00:11:17.382 ] 00:11:17.382 } 00:11:17.382 } 00:11:17.382 }' 00:11:17.382 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.382 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:17.382 BaseBdev2 00:11:17.382 BaseBdev3' 00:11:17.382 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:17.382 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:17.382 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:17.642 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:17.642 "name": "NewBaseBdev", 00:11:17.642 "aliases": [ 00:11:17.642 "6145d27a-4191-4405-ae3b-54ba7688d527" 00:11:17.642 ], 00:11:17.642 "product_name": "Malloc disk", 00:11:17.642 "block_size": 512, 00:11:17.642 "num_blocks": 65536, 00:11:17.642 "uuid": "6145d27a-4191-4405-ae3b-54ba7688d527", 00:11:17.642 "assigned_rate_limits": { 00:11:17.642 "rw_ios_per_sec": 0, 00:11:17.642 "rw_mbytes_per_sec": 0, 00:11:17.642 "r_mbytes_per_sec": 0, 00:11:17.642 "w_mbytes_per_sec": 0 00:11:17.642 }, 00:11:17.642 "claimed": true, 00:11:17.642 "claim_type": "exclusive_write", 00:11:17.642 "zoned": false, 00:11:17.642 "supported_io_types": { 00:11:17.642 "read": true, 00:11:17.642 "write": true, 00:11:17.642 "unmap": true, 00:11:17.642 "flush": true, 00:11:17.642 "reset": true, 00:11:17.642 "nvme_admin": false, 00:11:17.642 "nvme_io": false, 00:11:17.642 "nvme_io_md": false, 00:11:17.642 "write_zeroes": true, 00:11:17.642 "zcopy": true, 00:11:17.642 "get_zone_info": false, 00:11:17.642 "zone_management": false, 00:11:17.642 "zone_append": false, 00:11:17.642 "compare": false, 00:11:17.642 "compare_and_write": false, 00:11:17.642 "abort": true, 00:11:17.642 "seek_hole": false, 00:11:17.642 "seek_data": false, 00:11:17.642 "copy": true, 00:11:17.642 "nvme_iov_md": false 00:11:17.642 }, 00:11:17.642 "memory_domains": [ 00:11:17.642 { 00:11:17.642 "dma_device_id": "system", 00:11:17.642 "dma_device_type": 1 00:11:17.642 }, 00:11:17.642 { 00:11:17.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.642 "dma_device_type": 2 00:11:17.642 } 00:11:17.642 ], 00:11:17.642 "driver_specific": {} 00:11:17.642 }' 00:11:17.642 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:17.901 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:11:17.901 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:17.902 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:17.902 06:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:17.902 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:17.902 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:17.902 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:17.902 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:17.902 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:18.161 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:18.161 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:18.161 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:18.161 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:18.161 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:18.430 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:18.430 "name": "BaseBdev2", 00:11:18.430 "aliases": [ 00:11:18.430 "20604fcb-f3f2-4734-a427-806b6f480bef" 00:11:18.430 ], 00:11:18.430 "product_name": "Malloc disk", 00:11:18.430 "block_size": 512, 00:11:18.430 "num_blocks": 65536, 00:11:18.430 "uuid": "20604fcb-f3f2-4734-a427-806b6f480bef", 00:11:18.430 "assigned_rate_limits": { 00:11:18.430 "rw_ios_per_sec": 0, 00:11:18.430 "rw_mbytes_per_sec": 0, 00:11:18.430 "r_mbytes_per_sec": 0, 00:11:18.430 "w_mbytes_per_sec": 0 00:11:18.430 }, 00:11:18.430 "claimed": true, 00:11:18.430 "claim_type": "exclusive_write", 00:11:18.430 "zoned": false, 00:11:18.430 "supported_io_types": { 00:11:18.430 "read": true, 00:11:18.430 "write": true, 00:11:18.430 "unmap": true, 00:11:18.430 "flush": true, 00:11:18.430 "reset": true, 00:11:18.430 "nvme_admin": false, 00:11:18.430 "nvme_io": false, 00:11:18.430 "nvme_io_md": false, 00:11:18.430 "write_zeroes": true, 00:11:18.430 "zcopy": true, 00:11:18.430 "get_zone_info": false, 00:11:18.430 "zone_management": false, 00:11:18.430 "zone_append": false, 00:11:18.430 "compare": false, 00:11:18.430 "compare_and_write": false, 00:11:18.430 "abort": true, 00:11:18.430 "seek_hole": false, 00:11:18.430 "seek_data": false, 00:11:18.430 "copy": true, 00:11:18.430 "nvme_iov_md": false 00:11:18.430 }, 00:11:18.430 "memory_domains": [ 00:11:18.430 { 00:11:18.430 "dma_device_id": "system", 00:11:18.430 "dma_device_type": 1 00:11:18.430 }, 00:11:18.430 { 00:11:18.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.430 "dma_device_type": 2 00:11:18.430 } 00:11:18.430 ], 00:11:18.430 "driver_specific": {} 00:11:18.430 }' 00:11:18.430 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:18.430 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:18.430 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:18.430 06:42:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:18.430 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:18.430 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:18.430 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:18.703 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:18.703 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:18.703 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:18.703 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:18.703 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:18.703 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:18.703 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:18.703 06:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:18.962 06:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:18.962 "name": "BaseBdev3", 00:11:18.962 "aliases": [ 00:11:18.962 "e483dc43-0d90-4d5a-876c-0d0d6ae4978b" 00:11:18.962 ], 00:11:18.962 "product_name": "Malloc disk", 00:11:18.962 "block_size": 512, 00:11:18.962 "num_blocks": 65536, 00:11:18.962 "uuid": "e483dc43-0d90-4d5a-876c-0d0d6ae4978b", 00:11:18.962 "assigned_rate_limits": { 00:11:18.962 "rw_ios_per_sec": 0, 00:11:18.962 "rw_mbytes_per_sec": 0, 00:11:18.962 "r_mbytes_per_sec": 0, 00:11:18.962 "w_mbytes_per_sec": 0 00:11:18.962 }, 00:11:18.962 "claimed": true, 00:11:18.962 "claim_type": "exclusive_write", 00:11:18.962 "zoned": false, 00:11:18.962 "supported_io_types": { 00:11:18.962 "read": true, 00:11:18.962 "write": true, 00:11:18.962 "unmap": true, 00:11:18.962 "flush": true, 00:11:18.962 "reset": true, 00:11:18.963 "nvme_admin": false, 00:11:18.963 "nvme_io": false, 00:11:18.963 "nvme_io_md": false, 00:11:18.963 "write_zeroes": true, 00:11:18.963 "zcopy": true, 00:11:18.963 "get_zone_info": false, 00:11:18.963 "zone_management": false, 00:11:18.963 "zone_append": false, 00:11:18.963 "compare": false, 00:11:18.963 "compare_and_write": false, 00:11:18.963 "abort": true, 00:11:18.963 "seek_hole": false, 00:11:18.963 "seek_data": false, 00:11:18.963 "copy": true, 00:11:18.963 "nvme_iov_md": false 00:11:18.963 }, 00:11:18.963 "memory_domains": [ 00:11:18.963 { 00:11:18.963 "dma_device_id": "system", 00:11:18.963 "dma_device_type": 1 00:11:18.963 }, 00:11:18.963 { 00:11:18.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.963 "dma_device_type": 2 00:11:18.963 } 00:11:18.963 ], 00:11:18.963 "driver_specific": {} 00:11:18.963 }' 00:11:18.963 06:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:18.963 06:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:18.963 06:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:18.963 06:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:19.223 06:42:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:19.223 06:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:19.223 06:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:19.223 06:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:19.223 06:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:19.223 06:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:19.223 06:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:19.223 06:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:19.223 06:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:19.482 [2024-08-14 06:42:46.682610] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:19.482 [2024-08-14 06:42:46.682659] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.482 [2024-08-14 06:42:46.682747] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.482 [2024-08-14 06:42:46.682812] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.482 [2024-08-14 06:42:46.682829] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:11:19.482 06:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 78962 00:11:19.482 06:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 78962 ']' 00:11:19.482 06:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 78962 00:11:19.482 06:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:11:19.482 06:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:19.482 06:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78962 00:11:19.742 killing process with pid 78962 00:11:19.742 06:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:19.742 06:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:19.742 06:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78962' 00:11:19.742 06:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 78962 00:11:19.742 06:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 78962 00:11:19.742 [2024-08-14 06:42:46.742595] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.742 [2024-08-14 06:42:46.775489] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:20.001 ************************************ 00:11:20.001 END TEST raid_state_function_test_sb 00:11:20.001 ************************************ 00:11:20.001 06:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:11:20.001 00:11:20.001 real 0m27.371s 00:11:20.001 user 0m51.103s 00:11:20.001 sys 0m3.957s 00:11:20.001 
06:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:20.001 06:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.001 06:42:47 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:20.001 06:42:47 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:20.001 06:42:47 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:20.001 06:42:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:20.001 ************************************ 00:11:20.001 START TEST raid_superblock_test 00:11:20.001 ************************************ 00:11:20.001 06:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 3 00:11:20.001 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:11:20.001 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:11:20.001 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:11:20.001 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:11:20.001 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:11:20.001 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:11:20.001 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:11:20.001 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:11:20.001 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:11:20.001 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:11:20.001 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:11:20.002 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:11:20.002 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:11:20.002 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:11:20.002 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:11:20.002 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:11:20.002 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=79887 00:11:20.002 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:20.002 06:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 79887 /var/tmp/spdk-raid.sock 00:11:20.002 06:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 79887 ']' 00:11:20.002 06:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:20.002 06:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:20.002 06:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
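(Editorial aside, not part of the captured run: the raid_superblock_test below assembles a concat array from passthru bdevs carrying fixed UUIDs. A hand-run sketch of that construction follows, using only RPC calls that appear later in this log; the 32 MiB / 512-byte-block malloc geometry, the pt1/pt2/pt3 names, the 64 KiB strip size and the zero-padded UUIDs all come from the test itself, while the RPC/SOCK variables are shorthand introduced here and assume the bdev_svc app is already listening on /var/tmp/spdk-raid.sock.)

# Sketch only -- commands mirror the bdev_raid.sh steps recorded below
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
# Back each position with a 32 MiB malloc bdev (512-byte blocks), as bdev_malloc_create does below
"$RPC" -s "$SOCK" bdev_malloc_create 32 512 -b malloc1
"$RPC" -s "$SOCK" bdev_malloc_create 32 512 -b malloc2
"$RPC" -s "$SOCK" bdev_malloc_create 32 512 -b malloc3
# Wrap each malloc in a passthru bdev with a fixed UUID so the raid superblock can identify it
"$RPC" -s "$SOCK" bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
"$RPC" -s "$SOCK" bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
"$RPC" -s "$SOCK" bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
# Assemble a concat raid with a 64 KiB strip and an on-disk superblock (-s), as bdev_raid_create does below
"$RPC" -s "$SOCK" bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s
# Confirm the array came up online
"$RPC" -s "$SOCK" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
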
00:11:20.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:20.002 06:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:20.002 06:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.002 [2024-08-14 06:42:47.180903] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:11:20.002 [2024-08-14 06:42:47.181039] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79887 ] 00:11:20.261 [2024-08-14 06:42:47.331474] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.261 [2024-08-14 06:42:47.385378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.261 [2024-08-14 06:42:47.429599] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.261 [2024-08-14 06:42:47.429660] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.830 06:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:20.830 06:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:11:20.830 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:11:20.830 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:11:20.830 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:11:20.830 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:11:20.830 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:20.830 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:20.830 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:11:20.830 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:20.830 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:21.089 malloc1 00:11:21.089 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:21.348 [2024-08-14 06:42:48.523407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:21.348 [2024-08-14 06:42:48.523496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.348 [2024-08-14 06:42:48.523529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:21.348 [2024-08-14 06:42:48.523539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.348 [2024-08-14 06:42:48.525882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.348 [2024-08-14 06:42:48.525931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:21.348 pt1 00:11:21.348 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( 
i++ )) 00:11:21.348 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:11:21.348 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:11:21.348 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:11:21.348 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:21.349 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:21.349 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:11:21.349 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:21.349 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:21.606 malloc2 00:11:21.606 06:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:21.865 [2024-08-14 06:42:49.055903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:21.865 [2024-08-14 06:42:49.055985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.865 [2024-08-14 06:42:49.056009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:21.865 [2024-08-14 06:42:49.056018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.865 [2024-08-14 06:42:49.058404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.865 [2024-08-14 06:42:49.058448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:21.865 pt2 00:11:21.865 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:11:21.865 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:11:21.865 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:11:21.865 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:11:21.865 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:21.865 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:21.865 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:11:21.865 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:21.865 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:11:22.123 malloc3 00:11:22.123 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:22.381 [2024-08-14 06:42:49.512914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:22.381 [2024-08-14 06:42:49.512997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:11:22.381 [2024-08-14 06:42:49.513023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:22.381 [2024-08-14 06:42:49.513033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.381 [2024-08-14 06:42:49.515356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.381 [2024-08-14 06:42:49.515396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:22.381 pt3 00:11:22.381 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:11:22.381 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:11:22.381 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:11:22.640 [2024-08-14 06:42:49.736700] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:22.640 [2024-08-14 06:42:49.738856] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:22.640 [2024-08-14 06:42:49.738937] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:22.640 [2024-08-14 06:42:49.739190] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:22.640 [2024-08-14 06:42:49.739218] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:22.640 [2024-08-14 06:42:49.739546] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:22.640 [2024-08-14 06:42:49.739717] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:22.640 [2024-08-14 06:42:49.739735] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:22.640 [2024-08-14 06:42:49.739913] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.640 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:22.640 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:22.640 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:22.640 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:22.640 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:22.640 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:22.640 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:22.640 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:22.640 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:22.640 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:22.640 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.640 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.899 06:42:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:22.899 "name": "raid_bdev1", 00:11:22.899 "uuid": "31757fd1-1bf9-47a7-9796-18b6c76296a9", 00:11:22.899 "strip_size_kb": 64, 00:11:22.899 "state": "online", 00:11:22.899 "raid_level": "concat", 00:11:22.899 "superblock": true, 00:11:22.899 "num_base_bdevs": 3, 00:11:22.899 "num_base_bdevs_discovered": 3, 00:11:22.899 "num_base_bdevs_operational": 3, 00:11:22.899 "base_bdevs_list": [ 00:11:22.899 { 00:11:22.899 "name": "pt1", 00:11:22.899 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:22.899 "is_configured": true, 00:11:22.899 "data_offset": 2048, 00:11:22.899 "data_size": 63488 00:11:22.899 }, 00:11:22.899 { 00:11:22.899 "name": "pt2", 00:11:22.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:22.899 "is_configured": true, 00:11:22.899 "data_offset": 2048, 00:11:22.899 "data_size": 63488 00:11:22.899 }, 00:11:22.899 { 00:11:22.899 "name": "pt3", 00:11:22.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:22.899 "is_configured": true, 00:11:22.899 "data_offset": 2048, 00:11:22.899 "data_size": 63488 00:11:22.899 } 00:11:22.899 ] 00:11:22.899 }' 00:11:22.899 06:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:22.899 06:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.468 06:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:11:23.468 06:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:23.468 06:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:23.468 06:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:23.468 06:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:23.468 06:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:23.468 06:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:23.468 06:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:23.727 [2024-08-14 06:42:50.787188] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.727 06:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:23.727 "name": "raid_bdev1", 00:11:23.727 "aliases": [ 00:11:23.727 "31757fd1-1bf9-47a7-9796-18b6c76296a9" 00:11:23.727 ], 00:11:23.727 "product_name": "Raid Volume", 00:11:23.727 "block_size": 512, 00:11:23.727 "num_blocks": 190464, 00:11:23.727 "uuid": "31757fd1-1bf9-47a7-9796-18b6c76296a9", 00:11:23.727 "assigned_rate_limits": { 00:11:23.727 "rw_ios_per_sec": 0, 00:11:23.727 "rw_mbytes_per_sec": 0, 00:11:23.727 "r_mbytes_per_sec": 0, 00:11:23.727 "w_mbytes_per_sec": 0 00:11:23.727 }, 00:11:23.727 "claimed": false, 00:11:23.727 "zoned": false, 00:11:23.727 "supported_io_types": { 00:11:23.727 "read": true, 00:11:23.727 "write": true, 00:11:23.727 "unmap": true, 00:11:23.727 "flush": true, 00:11:23.727 "reset": true, 00:11:23.727 "nvme_admin": false, 00:11:23.727 "nvme_io": false, 00:11:23.727 "nvme_io_md": false, 00:11:23.727 "write_zeroes": true, 00:11:23.727 "zcopy": false, 00:11:23.727 "get_zone_info": false, 00:11:23.727 "zone_management": false, 00:11:23.727 "zone_append": false, 00:11:23.727 "compare": false, 00:11:23.727 "compare_and_write": false, 
00:11:23.727 "abort": false, 00:11:23.727 "seek_hole": false, 00:11:23.727 "seek_data": false, 00:11:23.727 "copy": false, 00:11:23.727 "nvme_iov_md": false 00:11:23.727 }, 00:11:23.727 "memory_domains": [ 00:11:23.727 { 00:11:23.727 "dma_device_id": "system", 00:11:23.727 "dma_device_type": 1 00:11:23.727 }, 00:11:23.727 { 00:11:23.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.727 "dma_device_type": 2 00:11:23.727 }, 00:11:23.727 { 00:11:23.727 "dma_device_id": "system", 00:11:23.727 "dma_device_type": 1 00:11:23.727 }, 00:11:23.727 { 00:11:23.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.727 "dma_device_type": 2 00:11:23.727 }, 00:11:23.727 { 00:11:23.727 "dma_device_id": "system", 00:11:23.727 "dma_device_type": 1 00:11:23.727 }, 00:11:23.727 { 00:11:23.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.727 "dma_device_type": 2 00:11:23.727 } 00:11:23.727 ], 00:11:23.727 "driver_specific": { 00:11:23.727 "raid": { 00:11:23.727 "uuid": "31757fd1-1bf9-47a7-9796-18b6c76296a9", 00:11:23.727 "strip_size_kb": 64, 00:11:23.727 "state": "online", 00:11:23.727 "raid_level": "concat", 00:11:23.727 "superblock": true, 00:11:23.727 "num_base_bdevs": 3, 00:11:23.727 "num_base_bdevs_discovered": 3, 00:11:23.727 "num_base_bdevs_operational": 3, 00:11:23.727 "base_bdevs_list": [ 00:11:23.727 { 00:11:23.727 "name": "pt1", 00:11:23.727 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:23.727 "is_configured": true, 00:11:23.727 "data_offset": 2048, 00:11:23.727 "data_size": 63488 00:11:23.727 }, 00:11:23.727 { 00:11:23.727 "name": "pt2", 00:11:23.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:23.727 "is_configured": true, 00:11:23.727 "data_offset": 2048, 00:11:23.727 "data_size": 63488 00:11:23.727 }, 00:11:23.727 { 00:11:23.727 "name": "pt3", 00:11:23.727 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:23.727 "is_configured": true, 00:11:23.727 "data_offset": 2048, 00:11:23.727 "data_size": 63488 00:11:23.727 } 00:11:23.727 ] 00:11:23.727 } 00:11:23.727 } 00:11:23.727 }' 00:11:23.727 06:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.727 06:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:23.727 pt2 00:11:23.727 pt3' 00:11:23.727 06:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:23.727 06:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:23.727 06:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:23.987 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:23.987 "name": "pt1", 00:11:23.987 "aliases": [ 00:11:23.987 "00000000-0000-0000-0000-000000000001" 00:11:23.987 ], 00:11:23.987 "product_name": "passthru", 00:11:23.987 "block_size": 512, 00:11:23.987 "num_blocks": 65536, 00:11:23.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:23.987 "assigned_rate_limits": { 00:11:23.987 "rw_ios_per_sec": 0, 00:11:23.987 "rw_mbytes_per_sec": 0, 00:11:23.987 "r_mbytes_per_sec": 0, 00:11:23.987 "w_mbytes_per_sec": 0 00:11:23.987 }, 00:11:23.987 "claimed": true, 00:11:23.987 "claim_type": "exclusive_write", 00:11:23.987 "zoned": false, 00:11:23.987 "supported_io_types": { 00:11:23.987 "read": true, 00:11:23.987 "write": true, 00:11:23.987 "unmap": true, 00:11:23.987 
"flush": true, 00:11:23.987 "reset": true, 00:11:23.987 "nvme_admin": false, 00:11:23.987 "nvme_io": false, 00:11:23.987 "nvme_io_md": false, 00:11:23.987 "write_zeroes": true, 00:11:23.987 "zcopy": true, 00:11:23.987 "get_zone_info": false, 00:11:23.987 "zone_management": false, 00:11:23.987 "zone_append": false, 00:11:23.987 "compare": false, 00:11:23.987 "compare_and_write": false, 00:11:23.987 "abort": true, 00:11:23.987 "seek_hole": false, 00:11:23.987 "seek_data": false, 00:11:23.987 "copy": true, 00:11:23.987 "nvme_iov_md": false 00:11:23.987 }, 00:11:23.987 "memory_domains": [ 00:11:23.987 { 00:11:23.987 "dma_device_id": "system", 00:11:23.987 "dma_device_type": 1 00:11:23.987 }, 00:11:23.987 { 00:11:23.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.987 "dma_device_type": 2 00:11:23.987 } 00:11:23.987 ], 00:11:23.987 "driver_specific": { 00:11:23.987 "passthru": { 00:11:23.987 "name": "pt1", 00:11:23.987 "base_bdev_name": "malloc1" 00:11:23.987 } 00:11:23.987 } 00:11:23.987 }' 00:11:23.987 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:23.987 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:23.987 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:23.987 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:24.246 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:24.246 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:24.246 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:24.246 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:24.246 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:24.246 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:24.246 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:24.246 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:24.246 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:24.246 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:24.246 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:24.506 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:24.506 "name": "pt2", 00:11:24.506 "aliases": [ 00:11:24.506 "00000000-0000-0000-0000-000000000002" 00:11:24.506 ], 00:11:24.506 "product_name": "passthru", 00:11:24.506 "block_size": 512, 00:11:24.506 "num_blocks": 65536, 00:11:24.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.506 "assigned_rate_limits": { 00:11:24.506 "rw_ios_per_sec": 0, 00:11:24.506 "rw_mbytes_per_sec": 0, 00:11:24.506 "r_mbytes_per_sec": 0, 00:11:24.506 "w_mbytes_per_sec": 0 00:11:24.506 }, 00:11:24.506 "claimed": true, 00:11:24.506 "claim_type": "exclusive_write", 00:11:24.506 "zoned": false, 00:11:24.506 "supported_io_types": { 00:11:24.506 "read": true, 00:11:24.506 "write": true, 00:11:24.506 "unmap": true, 00:11:24.506 "flush": true, 00:11:24.506 "reset": true, 00:11:24.506 "nvme_admin": false, 00:11:24.506 "nvme_io": false, 00:11:24.506 "nvme_io_md": false, 00:11:24.506 
"write_zeroes": true, 00:11:24.506 "zcopy": true, 00:11:24.506 "get_zone_info": false, 00:11:24.506 "zone_management": false, 00:11:24.506 "zone_append": false, 00:11:24.506 "compare": false, 00:11:24.506 "compare_and_write": false, 00:11:24.506 "abort": true, 00:11:24.506 "seek_hole": false, 00:11:24.506 "seek_data": false, 00:11:24.506 "copy": true, 00:11:24.506 "nvme_iov_md": false 00:11:24.506 }, 00:11:24.506 "memory_domains": [ 00:11:24.506 { 00:11:24.506 "dma_device_id": "system", 00:11:24.506 "dma_device_type": 1 00:11:24.506 }, 00:11:24.506 { 00:11:24.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.506 "dma_device_type": 2 00:11:24.506 } 00:11:24.506 ], 00:11:24.506 "driver_specific": { 00:11:24.506 "passthru": { 00:11:24.506 "name": "pt2", 00:11:24.506 "base_bdev_name": "malloc2" 00:11:24.506 } 00:11:24.506 } 00:11:24.506 }' 00:11:24.506 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:24.506 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:24.765 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:24.765 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:24.765 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:24.765 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:24.765 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:24.765 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:24.765 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:24.765 06:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:24.765 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:25.025 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:25.025 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:25.025 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:25.025 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:25.025 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:25.025 "name": "pt3", 00:11:25.025 "aliases": [ 00:11:25.025 "00000000-0000-0000-0000-000000000003" 00:11:25.025 ], 00:11:25.025 "product_name": "passthru", 00:11:25.025 "block_size": 512, 00:11:25.025 "num_blocks": 65536, 00:11:25.025 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:25.025 "assigned_rate_limits": { 00:11:25.025 "rw_ios_per_sec": 0, 00:11:25.025 "rw_mbytes_per_sec": 0, 00:11:25.025 "r_mbytes_per_sec": 0, 00:11:25.025 "w_mbytes_per_sec": 0 00:11:25.025 }, 00:11:25.025 "claimed": true, 00:11:25.025 "claim_type": "exclusive_write", 00:11:25.025 "zoned": false, 00:11:25.025 "supported_io_types": { 00:11:25.025 "read": true, 00:11:25.025 "write": true, 00:11:25.025 "unmap": true, 00:11:25.025 "flush": true, 00:11:25.025 "reset": true, 00:11:25.025 "nvme_admin": false, 00:11:25.025 "nvme_io": false, 00:11:25.025 "nvme_io_md": false, 00:11:25.025 "write_zeroes": true, 00:11:25.025 "zcopy": true, 00:11:25.025 "get_zone_info": false, 00:11:25.025 "zone_management": false, 00:11:25.025 "zone_append": 
false, 00:11:25.025 "compare": false, 00:11:25.025 "compare_and_write": false, 00:11:25.025 "abort": true, 00:11:25.025 "seek_hole": false, 00:11:25.025 "seek_data": false, 00:11:25.025 "copy": true, 00:11:25.025 "nvme_iov_md": false 00:11:25.025 }, 00:11:25.025 "memory_domains": [ 00:11:25.025 { 00:11:25.025 "dma_device_id": "system", 00:11:25.025 "dma_device_type": 1 00:11:25.025 }, 00:11:25.025 { 00:11:25.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.025 "dma_device_type": 2 00:11:25.025 } 00:11:25.025 ], 00:11:25.025 "driver_specific": { 00:11:25.025 "passthru": { 00:11:25.025 "name": "pt3", 00:11:25.025 "base_bdev_name": "malloc3" 00:11:25.025 } 00:11:25.025 } 00:11:25.025 }' 00:11:25.025 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:25.284 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:25.284 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:25.284 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:25.284 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:25.284 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:25.284 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:25.284 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:25.544 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:25.544 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:25.544 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:25.544 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:25.544 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:25.544 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:11:25.803 [2024-08-14 06:42:52.891704] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.803 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=31757fd1-1bf9-47a7-9796-18b6c76296a9 00:11:25.803 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 31757fd1-1bf9-47a7-9796-18b6c76296a9 ']' 00:11:25.803 06:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:26.063 [2024-08-14 06:42:53.119004] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.063 [2024-08-14 06:42:53.119059] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.063 [2024-08-14 06:42:53.119164] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.063 [2024-08-14 06:42:53.119253] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.063 [2024-08-14 06:42:53.119273] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:26.063 06:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.063 06:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:11:26.322 06:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:11:26.322 06:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:11:26.322 06:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:11:26.322 06:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:26.582 06:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:11:26.582 06:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:26.841 06:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:11:26.841 06:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:27.099 06:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:27.099 06:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:27.358 06:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:11:27.358 06:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:27.358 06:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:11:27.358 06:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:27.358 06:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.358 06:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:11:27.358 06:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.358 06:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:11:27.358 06:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.358 06:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:11:27.358 06:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.358 06:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:27.358 06:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:27.617 [2024-08-14 
06:42:54.652832] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:27.617 [2024-08-14 06:42:54.654987] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:27.617 [2024-08-14 06:42:54.655138] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:27.617 [2024-08-14 06:42:54.655231] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:27.617 [2024-08-14 06:42:54.655308] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:27.617 [2024-08-14 06:42:54.655331] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:27.617 [2024-08-14 06:42:54.655348] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:27.617 [2024-08-14 06:42:54.655360] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:11:27.617 request: 00:11:27.617 { 00:11:27.617 "name": "raid_bdev1", 00:11:27.617 "raid_level": "concat", 00:11:27.617 "base_bdevs": [ 00:11:27.617 "malloc1", 00:11:27.617 "malloc2", 00:11:27.617 "malloc3" 00:11:27.617 ], 00:11:27.617 "strip_size_kb": 64, 00:11:27.617 "superblock": false, 00:11:27.617 "method": "bdev_raid_create", 00:11:27.617 "req_id": 1 00:11:27.617 } 00:11:27.617 Got JSON-RPC error response 00:11:27.617 response: 00:11:27.617 { 00:11:27.617 "code": -17, 00:11:27.617 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:27.617 } 00:11:27.617 06:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:11:27.617 06:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:11:27.617 06:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:11:27.617 06:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:11:27.617 06:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:27.617 06:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:11:27.876 06:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:11:27.876 06:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:11:27.876 06:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:28.136 [2024-08-14 06:42:55.156623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:28.136 [2024-08-14 06:42:55.156811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.136 [2024-08-14 06:42:55.156858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:28.136 [2024-08-14 06:42:55.156901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.136 [2024-08-14 06:42:55.159487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.136 [2024-08-14 06:42:55.159607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:28.136 [2024-08-14 
06:42:55.159760] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:28.136 [2024-08-14 06:42:55.159845] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:28.136 pt1 00:11:28.136 06:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:28.136 06:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:28.136 06:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:28.136 06:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:28.136 06:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:28.136 06:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:28.136 06:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:28.136 06:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:28.136 06:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:28.136 06:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:28.136 06:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.136 06:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:28.395 06:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:28.395 "name": "raid_bdev1", 00:11:28.395 "uuid": "31757fd1-1bf9-47a7-9796-18b6c76296a9", 00:11:28.395 "strip_size_kb": 64, 00:11:28.395 "state": "configuring", 00:11:28.395 "raid_level": "concat", 00:11:28.395 "superblock": true, 00:11:28.395 "num_base_bdevs": 3, 00:11:28.395 "num_base_bdevs_discovered": 1, 00:11:28.395 "num_base_bdevs_operational": 3, 00:11:28.395 "base_bdevs_list": [ 00:11:28.395 { 00:11:28.395 "name": "pt1", 00:11:28.395 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:28.395 "is_configured": true, 00:11:28.395 "data_offset": 2048, 00:11:28.395 "data_size": 63488 00:11:28.395 }, 00:11:28.395 { 00:11:28.395 "name": null, 00:11:28.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.395 "is_configured": false, 00:11:28.395 "data_offset": 2048, 00:11:28.395 "data_size": 63488 00:11:28.395 }, 00:11:28.395 { 00:11:28.395 "name": null, 00:11:28.395 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.395 "is_configured": false, 00:11:28.395 "data_offset": 2048, 00:11:28.395 "data_size": 63488 00:11:28.395 } 00:11:28.395 ] 00:11:28.395 }' 00:11:28.395 06:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:28.395 06:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.982 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:11:28.982 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:29.240 [2024-08-14 06:42:56.294753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:29.240 [2024-08-14 06:42:56.294953] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.240 [2024-08-14 06:42:56.294989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:29.240 [2024-08-14 06:42:56.295001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.240 [2024-08-14 06:42:56.295516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.240 [2024-08-14 06:42:56.295551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:29.240 [2024-08-14 06:42:56.295650] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:29.240 [2024-08-14 06:42:56.295675] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:29.240 pt2 00:11:29.240 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:29.500 [2024-08-14 06:42:56.546433] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:29.500 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:29.500 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:29.500 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:29.500 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:29.500 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:29.500 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:29.500 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:29.500 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:29.500 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:29.500 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:29.500 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.500 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.759 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:29.759 "name": "raid_bdev1", 00:11:29.759 "uuid": "31757fd1-1bf9-47a7-9796-18b6c76296a9", 00:11:29.759 "strip_size_kb": 64, 00:11:29.759 "state": "configuring", 00:11:29.759 "raid_level": "concat", 00:11:29.759 "superblock": true, 00:11:29.759 "num_base_bdevs": 3, 00:11:29.759 "num_base_bdevs_discovered": 1, 00:11:29.759 "num_base_bdevs_operational": 3, 00:11:29.759 "base_bdevs_list": [ 00:11:29.759 { 00:11:29.759 "name": "pt1", 00:11:29.759 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:29.759 "is_configured": true, 00:11:29.759 "data_offset": 2048, 00:11:29.759 "data_size": 63488 00:11:29.759 }, 00:11:29.759 { 00:11:29.759 "name": null, 00:11:29.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:29.759 "is_configured": false, 00:11:29.759 "data_offset": 2048, 00:11:29.759 "data_size": 63488 00:11:29.759 }, 00:11:29.759 { 00:11:29.759 "name": null, 00:11:29.759 "uuid": "00000000-0000-0000-0000-000000000003", 
00:11:29.759 "is_configured": false, 00:11:29.759 "data_offset": 2048, 00:11:29.759 "data_size": 63488 00:11:29.759 } 00:11:29.759 ] 00:11:29.759 }' 00:11:29.759 06:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:29.759 06:42:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.324 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:11:30.324 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:11:30.324 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:30.582 [2024-08-14 06:42:57.653073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:30.582 [2024-08-14 06:42:57.653189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.582 [2024-08-14 06:42:57.653212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:30.582 [2024-08-14 06:42:57.653233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.582 [2024-08-14 06:42:57.653728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.582 [2024-08-14 06:42:57.653759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:30.582 [2024-08-14 06:42:57.653847] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:30.582 [2024-08-14 06:42:57.653874] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:30.582 pt2 00:11:30.582 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:11:30.582 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:11:30.583 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:30.842 [2024-08-14 06:42:57.920729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:30.842 [2024-08-14 06:42:57.920919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.842 [2024-08-14 06:42:57.920946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:30.842 [2024-08-14 06:42:57.920963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.842 [2024-08-14 06:42:57.921511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.842 [2024-08-14 06:42:57.921540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:30.842 [2024-08-14 06:42:57.921631] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:30.842 [2024-08-14 06:42:57.921658] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:30.842 [2024-08-14 06:42:57.921793] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:11:30.842 [2024-08-14 06:42:57.921809] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:30.842 [2024-08-14 06:42:57.922075] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 
00:11:30.842 [2024-08-14 06:42:57.922234] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:11:30.842 [2024-08-14 06:42:57.922247] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:11:30.842 [2024-08-14 06:42:57.922375] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.842 pt3 00:11:30.842 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:11:30.842 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:11:30.842 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:30.842 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:30.842 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:30.842 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:30.842 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:30.842 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:30.842 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:30.842 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:30.842 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:30.842 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:30.842 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.842 06:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.101 06:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:31.101 "name": "raid_bdev1", 00:11:31.101 "uuid": "31757fd1-1bf9-47a7-9796-18b6c76296a9", 00:11:31.101 "strip_size_kb": 64, 00:11:31.101 "state": "online", 00:11:31.101 "raid_level": "concat", 00:11:31.101 "superblock": true, 00:11:31.101 "num_base_bdevs": 3, 00:11:31.101 "num_base_bdevs_discovered": 3, 00:11:31.101 "num_base_bdevs_operational": 3, 00:11:31.101 "base_bdevs_list": [ 00:11:31.101 { 00:11:31.101 "name": "pt1", 00:11:31.101 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:31.101 "is_configured": true, 00:11:31.101 "data_offset": 2048, 00:11:31.101 "data_size": 63488 00:11:31.101 }, 00:11:31.101 { 00:11:31.101 "name": "pt2", 00:11:31.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:31.101 "is_configured": true, 00:11:31.101 "data_offset": 2048, 00:11:31.101 "data_size": 63488 00:11:31.101 }, 00:11:31.101 { 00:11:31.101 "name": "pt3", 00:11:31.101 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:31.101 "is_configured": true, 00:11:31.101 "data_offset": 2048, 00:11:31.101 "data_size": 63488 00:11:31.101 } 00:11:31.101 ] 00:11:31.101 }' 00:11:31.101 06:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:31.101 06:42:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.668 06:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:11:31.668 
06:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:31.668 06:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:31.668 06:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:31.668 06:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:31.668 06:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:31.668 06:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:31.668 06:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:31.926 [2024-08-14 06:42:59.071125] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:31.926 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:31.926 "name": "raid_bdev1", 00:11:31.926 "aliases": [ 00:11:31.926 "31757fd1-1bf9-47a7-9796-18b6c76296a9" 00:11:31.926 ], 00:11:31.926 "product_name": "Raid Volume", 00:11:31.926 "block_size": 512, 00:11:31.926 "num_blocks": 190464, 00:11:31.926 "uuid": "31757fd1-1bf9-47a7-9796-18b6c76296a9", 00:11:31.926 "assigned_rate_limits": { 00:11:31.926 "rw_ios_per_sec": 0, 00:11:31.926 "rw_mbytes_per_sec": 0, 00:11:31.926 "r_mbytes_per_sec": 0, 00:11:31.926 "w_mbytes_per_sec": 0 00:11:31.926 }, 00:11:31.926 "claimed": false, 00:11:31.926 "zoned": false, 00:11:31.927 "supported_io_types": { 00:11:31.927 "read": true, 00:11:31.927 "write": true, 00:11:31.927 "unmap": true, 00:11:31.927 "flush": true, 00:11:31.927 "reset": true, 00:11:31.927 "nvme_admin": false, 00:11:31.927 "nvme_io": false, 00:11:31.927 "nvme_io_md": false, 00:11:31.927 "write_zeroes": true, 00:11:31.927 "zcopy": false, 00:11:31.927 "get_zone_info": false, 00:11:31.927 "zone_management": false, 00:11:31.927 "zone_append": false, 00:11:31.927 "compare": false, 00:11:31.927 "compare_and_write": false, 00:11:31.927 "abort": false, 00:11:31.927 "seek_hole": false, 00:11:31.927 "seek_data": false, 00:11:31.927 "copy": false, 00:11:31.927 "nvme_iov_md": false 00:11:31.927 }, 00:11:31.927 "memory_domains": [ 00:11:31.927 { 00:11:31.927 "dma_device_id": "system", 00:11:31.927 "dma_device_type": 1 00:11:31.927 }, 00:11:31.927 { 00:11:31.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.927 "dma_device_type": 2 00:11:31.927 }, 00:11:31.927 { 00:11:31.927 "dma_device_id": "system", 00:11:31.927 "dma_device_type": 1 00:11:31.927 }, 00:11:31.927 { 00:11:31.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.927 "dma_device_type": 2 00:11:31.927 }, 00:11:31.927 { 00:11:31.927 "dma_device_id": "system", 00:11:31.927 "dma_device_type": 1 00:11:31.927 }, 00:11:31.927 { 00:11:31.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.927 "dma_device_type": 2 00:11:31.927 } 00:11:31.927 ], 00:11:31.927 "driver_specific": { 00:11:31.927 "raid": { 00:11:31.927 "uuid": "31757fd1-1bf9-47a7-9796-18b6c76296a9", 00:11:31.927 "strip_size_kb": 64, 00:11:31.927 "state": "online", 00:11:31.927 "raid_level": "concat", 00:11:31.927 "superblock": true, 00:11:31.927 "num_base_bdevs": 3, 00:11:31.927 "num_base_bdevs_discovered": 3, 00:11:31.927 "num_base_bdevs_operational": 3, 00:11:31.927 "base_bdevs_list": [ 00:11:31.927 { 00:11:31.927 "name": "pt1", 00:11:31.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:31.927 "is_configured": true, 00:11:31.927 
"data_offset": 2048, 00:11:31.927 "data_size": 63488 00:11:31.927 }, 00:11:31.927 { 00:11:31.927 "name": "pt2", 00:11:31.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:31.927 "is_configured": true, 00:11:31.927 "data_offset": 2048, 00:11:31.927 "data_size": 63488 00:11:31.927 }, 00:11:31.927 { 00:11:31.927 "name": "pt3", 00:11:31.927 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:31.927 "is_configured": true, 00:11:31.927 "data_offset": 2048, 00:11:31.927 "data_size": 63488 00:11:31.927 } 00:11:31.927 ] 00:11:31.927 } 00:11:31.927 } 00:11:31.927 }' 00:11:31.927 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:31.927 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:31.927 pt2 00:11:31.927 pt3' 00:11:31.927 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:31.927 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:31.927 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:32.186 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:32.186 "name": "pt1", 00:11:32.186 "aliases": [ 00:11:32.186 "00000000-0000-0000-0000-000000000001" 00:11:32.186 ], 00:11:32.186 "product_name": "passthru", 00:11:32.186 "block_size": 512, 00:11:32.186 "num_blocks": 65536, 00:11:32.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:32.186 "assigned_rate_limits": { 00:11:32.186 "rw_ios_per_sec": 0, 00:11:32.186 "rw_mbytes_per_sec": 0, 00:11:32.186 "r_mbytes_per_sec": 0, 00:11:32.186 "w_mbytes_per_sec": 0 00:11:32.186 }, 00:11:32.186 "claimed": true, 00:11:32.186 "claim_type": "exclusive_write", 00:11:32.186 "zoned": false, 00:11:32.186 "supported_io_types": { 00:11:32.186 "read": true, 00:11:32.186 "write": true, 00:11:32.186 "unmap": true, 00:11:32.186 "flush": true, 00:11:32.186 "reset": true, 00:11:32.186 "nvme_admin": false, 00:11:32.186 "nvme_io": false, 00:11:32.186 "nvme_io_md": false, 00:11:32.186 "write_zeroes": true, 00:11:32.186 "zcopy": true, 00:11:32.186 "get_zone_info": false, 00:11:32.186 "zone_management": false, 00:11:32.186 "zone_append": false, 00:11:32.186 "compare": false, 00:11:32.186 "compare_and_write": false, 00:11:32.186 "abort": true, 00:11:32.186 "seek_hole": false, 00:11:32.186 "seek_data": false, 00:11:32.186 "copy": true, 00:11:32.186 "nvme_iov_md": false 00:11:32.186 }, 00:11:32.186 "memory_domains": [ 00:11:32.186 { 00:11:32.186 "dma_device_id": "system", 00:11:32.186 "dma_device_type": 1 00:11:32.186 }, 00:11:32.186 { 00:11:32.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.186 "dma_device_type": 2 00:11:32.186 } 00:11:32.186 ], 00:11:32.186 "driver_specific": { 00:11:32.186 "passthru": { 00:11:32.186 "name": "pt1", 00:11:32.186 "base_bdev_name": "malloc1" 00:11:32.186 } 00:11:32.186 } 00:11:32.186 }' 00:11:32.186 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:32.445 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:32.445 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:32.445 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:32.445 06:42:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:32.445 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:32.445 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:32.445 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:32.445 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:32.445 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:32.711 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:32.711 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:32.711 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:32.711 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:32.711 06:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:32.968 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:32.968 "name": "pt2", 00:11:32.968 "aliases": [ 00:11:32.968 "00000000-0000-0000-0000-000000000002" 00:11:32.968 ], 00:11:32.968 "product_name": "passthru", 00:11:32.968 "block_size": 512, 00:11:32.968 "num_blocks": 65536, 00:11:32.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:32.968 "assigned_rate_limits": { 00:11:32.968 "rw_ios_per_sec": 0, 00:11:32.968 "rw_mbytes_per_sec": 0, 00:11:32.968 "r_mbytes_per_sec": 0, 00:11:32.968 "w_mbytes_per_sec": 0 00:11:32.968 }, 00:11:32.968 "claimed": true, 00:11:32.968 "claim_type": "exclusive_write", 00:11:32.968 "zoned": false, 00:11:32.968 "supported_io_types": { 00:11:32.968 "read": true, 00:11:32.968 "write": true, 00:11:32.968 "unmap": true, 00:11:32.968 "flush": true, 00:11:32.968 "reset": true, 00:11:32.968 "nvme_admin": false, 00:11:32.968 "nvme_io": false, 00:11:32.968 "nvme_io_md": false, 00:11:32.968 "write_zeroes": true, 00:11:32.968 "zcopy": true, 00:11:32.968 "get_zone_info": false, 00:11:32.968 "zone_management": false, 00:11:32.969 "zone_append": false, 00:11:32.969 "compare": false, 00:11:32.969 "compare_and_write": false, 00:11:32.969 "abort": true, 00:11:32.969 "seek_hole": false, 00:11:32.969 "seek_data": false, 00:11:32.969 "copy": true, 00:11:32.969 "nvme_iov_md": false 00:11:32.969 }, 00:11:32.969 "memory_domains": [ 00:11:32.969 { 00:11:32.969 "dma_device_id": "system", 00:11:32.969 "dma_device_type": 1 00:11:32.969 }, 00:11:32.969 { 00:11:32.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.969 "dma_device_type": 2 00:11:32.969 } 00:11:32.969 ], 00:11:32.969 "driver_specific": { 00:11:32.969 "passthru": { 00:11:32.969 "name": "pt2", 00:11:32.969 "base_bdev_name": "malloc2" 00:11:32.969 } 00:11:32.969 } 00:11:32.969 }' 00:11:32.969 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:32.969 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:32.969 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:32.969 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:32.969 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:33.228 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:33.228 
06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:33.228 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:33.228 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:33.228 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:33.228 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:33.228 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:33.228 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:33.228 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:33.228 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:33.487 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:33.487 "name": "pt3", 00:11:33.487 "aliases": [ 00:11:33.487 "00000000-0000-0000-0000-000000000003" 00:11:33.487 ], 00:11:33.487 "product_name": "passthru", 00:11:33.487 "block_size": 512, 00:11:33.487 "num_blocks": 65536, 00:11:33.487 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.487 "assigned_rate_limits": { 00:11:33.487 "rw_ios_per_sec": 0, 00:11:33.487 "rw_mbytes_per_sec": 0, 00:11:33.487 "r_mbytes_per_sec": 0, 00:11:33.487 "w_mbytes_per_sec": 0 00:11:33.487 }, 00:11:33.487 "claimed": true, 00:11:33.487 "claim_type": "exclusive_write", 00:11:33.487 "zoned": false, 00:11:33.487 "supported_io_types": { 00:11:33.487 "read": true, 00:11:33.487 "write": true, 00:11:33.487 "unmap": true, 00:11:33.487 "flush": true, 00:11:33.487 "reset": true, 00:11:33.487 "nvme_admin": false, 00:11:33.487 "nvme_io": false, 00:11:33.487 "nvme_io_md": false, 00:11:33.488 "write_zeroes": true, 00:11:33.488 "zcopy": true, 00:11:33.488 "get_zone_info": false, 00:11:33.488 "zone_management": false, 00:11:33.488 "zone_append": false, 00:11:33.488 "compare": false, 00:11:33.488 "compare_and_write": false, 00:11:33.488 "abort": true, 00:11:33.488 "seek_hole": false, 00:11:33.488 "seek_data": false, 00:11:33.488 "copy": true, 00:11:33.488 "nvme_iov_md": false 00:11:33.488 }, 00:11:33.488 "memory_domains": [ 00:11:33.488 { 00:11:33.488 "dma_device_id": "system", 00:11:33.488 "dma_device_type": 1 00:11:33.488 }, 00:11:33.488 { 00:11:33.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.488 "dma_device_type": 2 00:11:33.488 } 00:11:33.488 ], 00:11:33.488 "driver_specific": { 00:11:33.488 "passthru": { 00:11:33.488 "name": "pt3", 00:11:33.488 "base_bdev_name": "malloc3" 00:11:33.488 } 00:11:33.488 } 00:11:33.488 }' 00:11:33.488 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:33.488 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:33.748 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:33.748 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:33.748 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:33.748 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:33.748 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:33.748 06:43:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:33.748 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:33.748 06:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:34.007 06:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:34.007 06:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:34.007 06:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:34.007 06:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:11:34.007 [2024-08-14 06:43:01.240502] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.274 06:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 31757fd1-1bf9-47a7-9796-18b6c76296a9 '!=' 31757fd1-1bf9-47a7-9796-18b6c76296a9 ']' 00:11:34.274 06:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:11:34.274 06:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:34.274 06:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:34.274 06:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 79887 00:11:34.274 06:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 79887 ']' 00:11:34.274 06:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 79887 00:11:34.274 06:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:11:34.274 06:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:34.274 06:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79887 00:11:34.274 killing process with pid 79887 00:11:34.274 06:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:34.274 06:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:34.274 06:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79887' 00:11:34.274 06:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 79887 00:11:34.274 [2024-08-14 06:43:01.301206] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:34.274 06:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 79887 00:11:34.274 [2024-08-14 06:43:01.301349] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.274 [2024-08-14 06:43:01.301418] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:34.274 [2024-08-14 06:43:01.301440] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:11:34.274 [2024-08-14 06:43:01.336892] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.539 06:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:11:34.539 00:11:34.539 real 0m14.491s 00:11:34.539 user 0m26.544s 00:11:34.539 sys 0m2.061s 00:11:34.539 ************************************ 00:11:34.539 END TEST raid_superblock_test 00:11:34.539 ************************************ 00:11:34.539 06:43:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:34.539 06:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.539 06:43:01 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:34.539 06:43:01 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:34.539 06:43:01 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:34.539 06:43:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.539 ************************************ 00:11:34.539 START TEST raid_read_error_test 00:11:34.539 ************************************ 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 3 read 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.jACrQiYstm 00:11:34.539 
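The raid_read_error_test setup above fixes the test parameters (concat level, three base bdevs, read-side error injection, a 64 KiB strip size, and a bdevperf log file) and then drives the JSON-RPC sequence that the following trace records: each base is a malloc bdev wrapped in an error bdev and exposed through a passthru bdev, the three passthru bdevs are assembled into a superblock-enabled concat raid, and a read failure is armed on the first base. A condensed sketch of that sequence, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock and reusing the bdev names from the trace, would be:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for i in 1 2 3; do
    # 32 MB malloc bdev with a 512-byte block size, as in the trace below
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
    # error-injection wrapper; creates EE_BaseBdev${i}_malloc
    $rpc -s $sock bdev_error_create BaseBdev${i}_malloc
    # passthru bdev that the raid later claims as a base bdev
    $rpc -s $sock bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
done
# concat raid, 64 KiB strip size, with an on-disk superblock (-s)
$rpc -s $sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
# arm a read failure on the first base bdev's error injector
$rpc -s $sock bdev_error_inject_error EE_BaseBdev1_malloc read failure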
06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=80347 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 80347 /var/tmp/spdk-raid.sock 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 80347 ']' 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:34.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:34.539 06:43:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.539 [2024-08-14 06:43:01.764221] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:11:34.539 [2024-08-14 06:43:01.764546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80347 ] 00:11:34.799 [2024-08-14 06:43:01.903150] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.799 [2024-08-14 06:43:01.958392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.799 [2024-08-14 06:43:02.003059] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.799 [2024-08-14 06:43:02.003113] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.738 06:43:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:35.738 06:43:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:11:35.738 06:43:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:35.738 06:43:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:35.738 BaseBdev1_malloc 00:11:35.738 06:43:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:35.997 true 00:11:35.997 06:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:36.257 [2024-08-14 06:43:03.456771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:36.257 [2024-08-14 06:43:03.456948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.257 [2024-08-14 06:43:03.456978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:11:36.257 [2024-08-14 06:43:03.456994] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.257 [2024-08-14 06:43:03.459539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.257 [2024-08-14 06:43:03.459644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:36.257 BaseBdev1 00:11:36.257 06:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:36.257 06:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:36.517 BaseBdev2_malloc 00:11:36.517 06:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:36.776 true 00:11:36.776 06:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:37.035 [2024-08-14 06:43:04.136861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:37.035 [2024-08-14 06:43:04.136958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.035 [2024-08-14 06:43:04.136983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:11:37.035 [2024-08-14 06:43:04.136995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.035 [2024-08-14 06:43:04.139431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.035 [2024-08-14 06:43:04.139473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:37.035 BaseBdev2 00:11:37.035 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:37.035 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:37.294 BaseBdev3_malloc 00:11:37.294 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:37.554 true 00:11:37.554 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:37.554 [2024-08-14 06:43:04.749327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:37.554 [2024-08-14 06:43:04.749497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.554 [2024-08-14 06:43:04.749526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:11:37.554 [2024-08-14 06:43:04.749537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.554 [2024-08-14 06:43:04.751860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.554 [2024-08-14 06:43:04.751909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:37.554 BaseBdev3 00:11:37.554 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:37.814 [2024-08-14 06:43:04.961012] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:37.814 [2024-08-14 06:43:04.962972] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.814 [2024-08-14 06:43:04.963043] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.814 [2024-08-14 06:43:04.963256] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:37.814 [2024-08-14 06:43:04.963270] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:37.814 [2024-08-14 06:43:04.963609] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:37.814 [2024-08-14 06:43:04.963758] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:37.814 [2024-08-14 06:43:04.963773] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:11:37.814 [2024-08-14 06:43:04.963932] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.814 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:37.814 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:37.814 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:37.814 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:37.814 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:37.814 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:37.814 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:37.814 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:37.814 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:37.814 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:37.814 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.814 06:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.074 06:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:38.074 "name": "raid_bdev1", 00:11:38.074 "uuid": "7f77035d-1ab4-44f7-90e5-96475c231f95", 00:11:38.074 "strip_size_kb": 64, 00:11:38.074 "state": "online", 00:11:38.074 "raid_level": "concat", 00:11:38.074 "superblock": true, 00:11:38.074 "num_base_bdevs": 3, 00:11:38.074 "num_base_bdevs_discovered": 3, 00:11:38.074 "num_base_bdevs_operational": 3, 00:11:38.074 "base_bdevs_list": [ 00:11:38.074 { 00:11:38.074 "name": "BaseBdev1", 00:11:38.074 "uuid": "2cb06358-e60c-596f-bc1e-1a8f168f0c23", 00:11:38.074 "is_configured": true, 00:11:38.074 "data_offset": 2048, 00:11:38.074 "data_size": 63488 00:11:38.074 }, 00:11:38.074 { 00:11:38.074 "name": "BaseBdev2", 00:11:38.074 "uuid": "629e533a-1b43-5822-b190-0897450d10ec", 00:11:38.074 "is_configured": true, 00:11:38.074 "data_offset": 2048, 
00:11:38.074 "data_size": 63488 00:11:38.074 }, 00:11:38.074 { 00:11:38.074 "name": "BaseBdev3", 00:11:38.074 "uuid": "063414bb-5ea2-5cb8-83c1-e7f30ab8c3bb", 00:11:38.074 "is_configured": true, 00:11:38.074 "data_offset": 2048, 00:11:38.074 "data_size": 63488 00:11:38.074 } 00:11:38.074 ] 00:11:38.074 }' 00:11:38.074 06:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:38.074 06:43:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.642 06:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:38.642 06:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:11:38.642 [2024-08-14 06:43:05.787907] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:39.578 06:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:39.837 06:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:11:39.837 06:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:11:39.837 06:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:11:39.837 06:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:39.837 06:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:39.837 06:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:39.837 06:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:39.837 06:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:39.837 06:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:39.837 06:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:39.838 06:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:39.838 06:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:39.838 06:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:39.838 06:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.838 06:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.097 06:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:40.097 "name": "raid_bdev1", 00:11:40.097 "uuid": "7f77035d-1ab4-44f7-90e5-96475c231f95", 00:11:40.097 "strip_size_kb": 64, 00:11:40.097 "state": "online", 00:11:40.097 "raid_level": "concat", 00:11:40.097 "superblock": true, 00:11:40.097 "num_base_bdevs": 3, 00:11:40.097 "num_base_bdevs_discovered": 3, 00:11:40.097 "num_base_bdevs_operational": 3, 00:11:40.097 "base_bdevs_list": [ 00:11:40.097 { 00:11:40.097 "name": "BaseBdev1", 00:11:40.097 "uuid": "2cb06358-e60c-596f-bc1e-1a8f168f0c23", 00:11:40.097 "is_configured": true, 00:11:40.097 "data_offset": 2048, 00:11:40.097 
"data_size": 63488 00:11:40.097 }, 00:11:40.097 { 00:11:40.097 "name": "BaseBdev2", 00:11:40.097 "uuid": "629e533a-1b43-5822-b190-0897450d10ec", 00:11:40.097 "is_configured": true, 00:11:40.097 "data_offset": 2048, 00:11:40.097 "data_size": 63488 00:11:40.097 }, 00:11:40.097 { 00:11:40.097 "name": "BaseBdev3", 00:11:40.097 "uuid": "063414bb-5ea2-5cb8-83c1-e7f30ab8c3bb", 00:11:40.097 "is_configured": true, 00:11:40.097 "data_offset": 2048, 00:11:40.097 "data_size": 63488 00:11:40.097 } 00:11:40.097 ] 00:11:40.097 }' 00:11:40.097 06:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:40.097 06:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.665 06:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:40.925 [2024-08-14 06:43:07.940408] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:40.925 [2024-08-14 06:43:07.940544] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:40.925 [2024-08-14 06:43:07.942982] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.925 [2024-08-14 06:43:07.943085] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.925 [2024-08-14 06:43:07.943144] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.925 [2024-08-14 06:43:07.943220] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:11:40.925 0 00:11:40.925 06:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 80347 00:11:40.925 06:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 80347 ']' 00:11:40.925 06:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 80347 00:11:40.925 06:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:11:40.925 06:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:40.925 06:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80347 00:11:40.925 killing process with pid 80347 00:11:40.925 06:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:40.925 06:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:40.925 06:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80347' 00:11:40.925 06:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 80347 00:11:40.925 [2024-08-14 06:43:07.989577] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:40.925 06:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 80347 00:11:40.925 [2024-08-14 06:43:08.015994] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.185 06:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:11:41.185 06:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.jACrQiYstm 00:11:41.185 06:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:11:41.185 06:43:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:11:41.185 06:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:11:41.185 06:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:41.185 06:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:41.185 06:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:11:41.185 00:11:41.185 real 0m6.601s 00:11:41.185 user 0m10.489s 00:11:41.185 sys 0m0.914s 00:11:41.185 06:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:41.185 06:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.185 ************************************ 00:11:41.185 END TEST raid_read_error_test 00:11:41.185 ************************************ 00:11:41.185 06:43:08 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:41.185 06:43:08 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:41.185 06:43:08 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:41.185 06:43:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:41.185 ************************************ 00:11:41.185 START TEST raid_write_error_test 00:11:41.185 ************************************ 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 3 write 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:11:41.185 06:43:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.xfHqJDHj0R 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=80527 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 80527 /var/tmp/spdk-raid.sock 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 80527 ']' 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:41.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:41.185 06:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.185 [2024-08-14 06:43:08.415001] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:11:41.185 [2024-08-14 06:43:08.415223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80527 ] 00:11:41.446 [2024-08-14 06:43:08.562033] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.446 [2024-08-14 06:43:08.613486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.446 [2024-08-14 06:43:08.655761] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.446 [2024-08-14 06:43:08.655881] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.014 06:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:42.015 06:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:11:42.015 06:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:42.015 06:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:42.273 BaseBdev1_malloc 00:11:42.273 06:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:42.533 true 00:11:42.533 06:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:42.793 [2024-08-14 06:43:09.904274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:42.793 [2024-08-14 06:43:09.904448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.793 [2024-08-14 06:43:09.904495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:11:42.793 [2024-08-14 06:43:09.904540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.793 [2024-08-14 06:43:09.906830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.793 [2024-08-14 06:43:09.906924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:42.793 BaseBdev1 00:11:42.793 06:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:42.793 06:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:43.053 BaseBdev2_malloc 00:11:43.053 06:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:43.312 true 00:11:43.312 06:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:43.312 [2024-08-14 06:43:10.532184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:43.312 [2024-08-14 06:43:10.532338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.312 [2024-08-14 06:43:10.532384] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:11:43.312 [2024-08-14 06:43:10.532416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.312 [2024-08-14 06:43:10.534669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.312 [2024-08-14 06:43:10.534757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:43.312 BaseBdev2 00:11:43.313 06:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:43.313 06:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:43.573 BaseBdev3_malloc 00:11:43.573 06:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:43.833 true 00:11:43.833 06:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:44.093 [2024-08-14 06:43:11.197432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:44.093 [2024-08-14 06:43:11.197598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.093 [2024-08-14 06:43:11.197640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:11:44.093 [2024-08-14 06:43:11.197675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.093 [2024-08-14 06:43:11.199888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.093 [2024-08-14 06:43:11.199978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:44.093 BaseBdev3 00:11:44.093 06:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:44.352 [2024-08-14 06:43:11.433140] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:44.352 [2024-08-14 06:43:11.435159] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.352 [2024-08-14 06:43:11.435310] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:44.352 [2024-08-14 06:43:11.435521] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:44.352 [2024-08-14 06:43:11.435540] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:44.352 [2024-08-14 06:43:11.435862] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:44.352 [2024-08-14 06:43:11.436002] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:44.352 [2024-08-14 06:43:11.436021] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:11:44.352 [2024-08-14 06:43:11.436201] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.352 06:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:44.352 
06:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:44.352 06:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:44.352 06:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:44.352 06:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:44.352 06:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:44.352 06:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:44.352 06:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:44.352 06:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:44.352 06:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:44.352 06:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.352 06:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.612 06:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:44.612 "name": "raid_bdev1", 00:11:44.612 "uuid": "89869da1-6443-4486-8923-1e89ef12afb6", 00:11:44.612 "strip_size_kb": 64, 00:11:44.612 "state": "online", 00:11:44.612 "raid_level": "concat", 00:11:44.612 "superblock": true, 00:11:44.612 "num_base_bdevs": 3, 00:11:44.612 "num_base_bdevs_discovered": 3, 00:11:44.612 "num_base_bdevs_operational": 3, 00:11:44.612 "base_bdevs_list": [ 00:11:44.612 { 00:11:44.612 "name": "BaseBdev1", 00:11:44.612 "uuid": "ec89b4ef-1065-5b98-af9f-e5f10fae5fdb", 00:11:44.612 "is_configured": true, 00:11:44.612 "data_offset": 2048, 00:11:44.612 "data_size": 63488 00:11:44.612 }, 00:11:44.612 { 00:11:44.612 "name": "BaseBdev2", 00:11:44.612 "uuid": "65066030-c25b-593f-bfb2-e953c7267ba9", 00:11:44.612 "is_configured": true, 00:11:44.612 "data_offset": 2048, 00:11:44.612 "data_size": 63488 00:11:44.612 }, 00:11:44.612 { 00:11:44.612 "name": "BaseBdev3", 00:11:44.612 "uuid": "6279e613-48a9-5a1a-9257-54c194abfc96", 00:11:44.612 "is_configured": true, 00:11:44.612 "data_offset": 2048, 00:11:44.612 "data_size": 63488 00:11:44.612 } 00:11:44.612 ] 00:11:44.612 }' 00:11:44.612 06:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:44.612 06:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.180 06:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:11:45.180 06:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:45.180 [2024-08-14 06:43:12.319969] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:46.131 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:46.390 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:11:46.390 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:11:46.390 06:43:13 
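At this point the concat raid has been assembled from the three passthru bdevs and bdevperf's perform_tests has been kicked off; the next step in the trace injects a write error into the error bdev stacked under BaseBdev1 and re-reads the raid state over RPC. The two calls below are copied from the trace (bdev_raid.sh@843 and @126); the surrounding shell is only a sketch of how they could be replayed by hand against the same socket:

    # inject write failures into the error bdev under BaseBdev1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_error_inject_error EE_BaseBdev1_malloc write failure

    # re-read the raid state and pick out raid_bdev1; concat has no redundancy,
    # so the test still expects all 3 base bdevs to be reported as configured
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")'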
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:11:46.390 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:46.390 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:46.390 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:46.390 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:46.390 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:46.390 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:46.390 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:46.390 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:46.390 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:46.390 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:46.390 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.390 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.648 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:46.648 "name": "raid_bdev1", 00:11:46.648 "uuid": "89869da1-6443-4486-8923-1e89ef12afb6", 00:11:46.648 "strip_size_kb": 64, 00:11:46.648 "state": "online", 00:11:46.648 "raid_level": "concat", 00:11:46.648 "superblock": true, 00:11:46.648 "num_base_bdevs": 3, 00:11:46.648 "num_base_bdevs_discovered": 3, 00:11:46.648 "num_base_bdevs_operational": 3, 00:11:46.648 "base_bdevs_list": [ 00:11:46.648 { 00:11:46.648 "name": "BaseBdev1", 00:11:46.648 "uuid": "ec89b4ef-1065-5b98-af9f-e5f10fae5fdb", 00:11:46.648 "is_configured": true, 00:11:46.648 "data_offset": 2048, 00:11:46.648 "data_size": 63488 00:11:46.648 }, 00:11:46.648 { 00:11:46.648 "name": "BaseBdev2", 00:11:46.648 "uuid": "65066030-c25b-593f-bfb2-e953c7267ba9", 00:11:46.648 "is_configured": true, 00:11:46.648 "data_offset": 2048, 00:11:46.648 "data_size": 63488 00:11:46.648 }, 00:11:46.648 { 00:11:46.649 "name": "BaseBdev3", 00:11:46.649 "uuid": "6279e613-48a9-5a1a-9257-54c194abfc96", 00:11:46.649 "is_configured": true, 00:11:46.649 "data_offset": 2048, 00:11:46.649 "data_size": 63488 00:11:46.649 } 00:11:46.649 ] 00:11:46.649 }' 00:11:46.649 06:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:46.649 06:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.217 06:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:47.476 [2024-08-14 06:43:14.485012] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.476 [2024-08-14 06:43:14.485152] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.476 [2024-08-14 06:43:14.487740] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.476 [2024-08-14 06:43:14.487832] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:11:47.476 [2024-08-14 06:43:14.487893] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.476 [2024-08-14 06:43:14.487936] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:11:47.476 0 00:11:47.476 06:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 80527 00:11:47.476 06:43:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 80527 ']' 00:11:47.476 06:43:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 80527 00:11:47.476 06:43:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:11:47.476 06:43:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:47.476 06:43:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80527 00:11:47.476 killing process with pid 80527 00:11:47.476 06:43:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:47.476 06:43:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:47.476 06:43:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80527' 00:11:47.476 06:43:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 80527 00:11:47.476 [2024-08-14 06:43:14.544501] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.476 06:43:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 80527 00:11:47.476 [2024-08-14 06:43:14.570447] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:47.737 06:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.xfHqJDHj0R 00:11:47.737 06:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:11:47.737 06:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:11:47.737 06:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:11:47.737 06:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:11:47.737 ************************************ 00:11:47.737 END TEST raid_write_error_test 00:11:47.737 ************************************ 00:11:47.737 06:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:47.737 06:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:47.737 06:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:11:47.737 00:11:47.737 real 0m6.488s 00:11:47.737 user 0m10.332s 00:11:47.737 sys 0m0.890s 00:11:47.737 06:43:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:47.737 06:43:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.737 06:43:14 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:11:47.737 06:43:14 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:11:47.737 06:43:14 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:47.737 06:43:14 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:47.737 06:43:14 bdev_raid -- common/autotest_common.sh@10 
-- # set +x 00:11:47.737 ************************************ 00:11:47.737 START TEST raid_state_function_test 00:11:47.737 ************************************ 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 false 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=80701 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 80701' 00:11:47.737 Process raid pid: 80701 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 80701 
/var/tmp/spdk-raid.sock 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 80701 ']' 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:47.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:47.737 06:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.737 [2024-08-14 06:43:14.970477] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:11:47.737 [2024-08-14 06:43:14.970587] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.997 [2024-08-14 06:43:15.117996] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.997 [2024-08-14 06:43:15.163986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.997 [2024-08-14 06:43:15.206215] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.997 [2024-08-14 06:43:15.206330] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.565 06:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:48.565 06:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:11:48.565 06:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:48.825 [2024-08-14 06:43:15.970003] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:48.825 [2024-08-14 06:43:15.970066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:48.825 [2024-08-14 06:43:15.970086] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:48.825 [2024-08-14 06:43:15.970095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:48.825 [2024-08-14 06:43:15.970104] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:48.825 [2024-08-14 06:43:15.970111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:48.825 06:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:48.825 06:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:48.825 06:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:48.825 06:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:48.825 06:43:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:48.825 06:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:48.825 06:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:48.825 06:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:48.825 06:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:48.825 06:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:48.825 06:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.825 06:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.084 06:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:49.084 "name": "Existed_Raid", 00:11:49.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.084 "strip_size_kb": 0, 00:11:49.084 "state": "configuring", 00:11:49.084 "raid_level": "raid1", 00:11:49.084 "superblock": false, 00:11:49.084 "num_base_bdevs": 3, 00:11:49.084 "num_base_bdevs_discovered": 0, 00:11:49.084 "num_base_bdevs_operational": 3, 00:11:49.084 "base_bdevs_list": [ 00:11:49.084 { 00:11:49.084 "name": "BaseBdev1", 00:11:49.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.084 "is_configured": false, 00:11:49.084 "data_offset": 0, 00:11:49.084 "data_size": 0 00:11:49.084 }, 00:11:49.084 { 00:11:49.084 "name": "BaseBdev2", 00:11:49.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.084 "is_configured": false, 00:11:49.084 "data_offset": 0, 00:11:49.084 "data_size": 0 00:11:49.084 }, 00:11:49.084 { 00:11:49.084 "name": "BaseBdev3", 00:11:49.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.084 "is_configured": false, 00:11:49.084 "data_offset": 0, 00:11:49.084 "data_size": 0 00:11:49.084 } 00:11:49.084 ] 00:11:49.084 }' 00:11:49.084 06:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:49.084 06:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.652 06:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:49.652 [2024-08-14 06:43:16.872363] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:49.652 [2024-08-14 06:43:16.872462] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:11:49.652 06:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:49.911 [2024-08-14 06:43:17.080050] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.911 [2024-08-14 06:43:17.080189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.911 [2024-08-14 06:43:17.080223] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:49.912 [2024-08-14 06:43:17.080245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:49.912 
[2024-08-14 06:43:17.080265] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:49.912 [2024-08-14 06:43:17.080285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:49.912 06:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:50.170 [2024-08-14 06:43:17.268621] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.170 BaseBdev1 00:11:50.170 06:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:50.170 06:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:11:50.170 06:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:50.170 06:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:50.170 06:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:50.170 06:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:50.170 06:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:50.430 06:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:50.430 [ 00:11:50.430 { 00:11:50.430 "name": "BaseBdev1", 00:11:50.430 "aliases": [ 00:11:50.430 "8fd8a613-5559-4259-b0ef-61e8754f2c22" 00:11:50.430 ], 00:11:50.430 "product_name": "Malloc disk", 00:11:50.430 "block_size": 512, 00:11:50.430 "num_blocks": 65536, 00:11:50.430 "uuid": "8fd8a613-5559-4259-b0ef-61e8754f2c22", 00:11:50.430 "assigned_rate_limits": { 00:11:50.430 "rw_ios_per_sec": 0, 00:11:50.430 "rw_mbytes_per_sec": 0, 00:11:50.430 "r_mbytes_per_sec": 0, 00:11:50.430 "w_mbytes_per_sec": 0 00:11:50.430 }, 00:11:50.430 "claimed": true, 00:11:50.430 "claim_type": "exclusive_write", 00:11:50.430 "zoned": false, 00:11:50.430 "supported_io_types": { 00:11:50.430 "read": true, 00:11:50.430 "write": true, 00:11:50.430 "unmap": true, 00:11:50.430 "flush": true, 00:11:50.430 "reset": true, 00:11:50.430 "nvme_admin": false, 00:11:50.430 "nvme_io": false, 00:11:50.430 "nvme_io_md": false, 00:11:50.430 "write_zeroes": true, 00:11:50.430 "zcopy": true, 00:11:50.430 "get_zone_info": false, 00:11:50.430 "zone_management": false, 00:11:50.430 "zone_append": false, 00:11:50.430 "compare": false, 00:11:50.430 "compare_and_write": false, 00:11:50.430 "abort": true, 00:11:50.430 "seek_hole": false, 00:11:50.430 "seek_data": false, 00:11:50.430 "copy": true, 00:11:50.430 "nvme_iov_md": false 00:11:50.430 }, 00:11:50.430 "memory_domains": [ 00:11:50.430 { 00:11:50.430 "dma_device_id": "system", 00:11:50.430 "dma_device_type": 1 00:11:50.430 }, 00:11:50.430 { 00:11:50.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.430 "dma_device_type": 2 00:11:50.430 } 00:11:50.430 ], 00:11:50.430 "driver_specific": {} 00:11:50.430 } 00:11:50.430 ] 00:11:50.430 06:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:50.430 06:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 3 00:11:50.430 06:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:50.430 06:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:50.430 06:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:50.430 06:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:50.430 06:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:50.430 06:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:50.430 06:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:50.430 06:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:50.430 06:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:50.430 06:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.430 06:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.690 06:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:50.690 "name": "Existed_Raid", 00:11:50.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.690 "strip_size_kb": 0, 00:11:50.690 "state": "configuring", 00:11:50.690 "raid_level": "raid1", 00:11:50.690 "superblock": false, 00:11:50.690 "num_base_bdevs": 3, 00:11:50.690 "num_base_bdevs_discovered": 1, 00:11:50.690 "num_base_bdevs_operational": 3, 00:11:50.690 "base_bdevs_list": [ 00:11:50.690 { 00:11:50.690 "name": "BaseBdev1", 00:11:50.690 "uuid": "8fd8a613-5559-4259-b0ef-61e8754f2c22", 00:11:50.690 "is_configured": true, 00:11:50.690 "data_offset": 0, 00:11:50.690 "data_size": 65536 00:11:50.690 }, 00:11:50.690 { 00:11:50.690 "name": "BaseBdev2", 00:11:50.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.690 "is_configured": false, 00:11:50.690 "data_offset": 0, 00:11:50.690 "data_size": 0 00:11:50.690 }, 00:11:50.690 { 00:11:50.690 "name": "BaseBdev3", 00:11:50.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.690 "is_configured": false, 00:11:50.690 "data_offset": 0, 00:11:50.690 "data_size": 0 00:11:50.690 } 00:11:50.690 ] 00:11:50.690 }' 00:11:50.690 06:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:50.690 06:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.265 06:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:51.536 [2024-08-14 06:43:18.602411] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:51.536 [2024-08-14 06:43:18.602479] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:11:51.536 06:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:51.796 [2024-08-14 06:43:18.806130] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:11:51.796 [2024-08-14 06:43:18.807980] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.796 [2024-08-14 06:43:18.808024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.796 [2024-08-14 06:43:18.808036] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:51.796 [2024-08-14 06:43:18.808043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:51.796 06:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:51.796 06:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:51.796 06:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:51.796 06:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:51.796 06:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:51.796 06:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:51.796 06:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:51.796 06:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:51.796 06:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:51.796 06:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:51.796 06:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:51.796 06:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:51.796 06:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.796 06:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.796 06:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:51.796 "name": "Existed_Raid", 00:11:51.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.796 "strip_size_kb": 0, 00:11:51.796 "state": "configuring", 00:11:51.796 "raid_level": "raid1", 00:11:51.796 "superblock": false, 00:11:51.796 "num_base_bdevs": 3, 00:11:51.796 "num_base_bdevs_discovered": 1, 00:11:51.796 "num_base_bdevs_operational": 3, 00:11:51.796 "base_bdevs_list": [ 00:11:51.796 { 00:11:51.796 "name": "BaseBdev1", 00:11:51.796 "uuid": "8fd8a613-5559-4259-b0ef-61e8754f2c22", 00:11:51.796 "is_configured": true, 00:11:51.796 "data_offset": 0, 00:11:51.796 "data_size": 65536 00:11:51.796 }, 00:11:51.796 { 00:11:51.796 "name": "BaseBdev2", 00:11:51.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.796 "is_configured": false, 00:11:51.796 "data_offset": 0, 00:11:51.796 "data_size": 0 00:11:51.796 }, 00:11:51.796 { 00:11:51.796 "name": "BaseBdev3", 00:11:51.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.796 "is_configured": false, 00:11:51.796 "data_offset": 0, 00:11:51.796 "data_size": 0 00:11:51.796 } 00:11:51.796 ] 00:11:51.796 }' 00:11:51.796 06:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:51.796 06:43:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.365 06:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:52.625 [2024-08-14 06:43:19.775803] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.625 BaseBdev2 00:11:52.625 06:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:52.625 06:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:11:52.625 06:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:52.625 06:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:52.625 06:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:52.625 06:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:52.625 06:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:52.884 06:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:53.144 [ 00:11:53.144 { 00:11:53.144 "name": "BaseBdev2", 00:11:53.144 "aliases": [ 00:11:53.144 "831537ff-14ea-4901-847b-ff144df7daca" 00:11:53.144 ], 00:11:53.144 "product_name": "Malloc disk", 00:11:53.144 "block_size": 512, 00:11:53.144 "num_blocks": 65536, 00:11:53.144 "uuid": "831537ff-14ea-4901-847b-ff144df7daca", 00:11:53.144 "assigned_rate_limits": { 00:11:53.144 "rw_ios_per_sec": 0, 00:11:53.144 "rw_mbytes_per_sec": 0, 00:11:53.144 "r_mbytes_per_sec": 0, 00:11:53.145 "w_mbytes_per_sec": 0 00:11:53.145 }, 00:11:53.145 "claimed": true, 00:11:53.145 "claim_type": "exclusive_write", 00:11:53.145 "zoned": false, 00:11:53.145 "supported_io_types": { 00:11:53.145 "read": true, 00:11:53.145 "write": true, 00:11:53.145 "unmap": true, 00:11:53.145 "flush": true, 00:11:53.145 "reset": true, 00:11:53.145 "nvme_admin": false, 00:11:53.145 "nvme_io": false, 00:11:53.145 "nvme_io_md": false, 00:11:53.145 "write_zeroes": true, 00:11:53.145 "zcopy": true, 00:11:53.145 "get_zone_info": false, 00:11:53.145 "zone_management": false, 00:11:53.145 "zone_append": false, 00:11:53.145 "compare": false, 00:11:53.145 "compare_and_write": false, 00:11:53.145 "abort": true, 00:11:53.145 "seek_hole": false, 00:11:53.145 "seek_data": false, 00:11:53.145 "copy": true, 00:11:53.145 "nvme_iov_md": false 00:11:53.145 }, 00:11:53.145 "memory_domains": [ 00:11:53.145 { 00:11:53.145 "dma_device_id": "system", 00:11:53.145 "dma_device_type": 1 00:11:53.145 }, 00:11:53.145 { 00:11:53.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.145 "dma_device_type": 2 00:11:53.145 } 00:11:53.145 ], 00:11:53.145 "driver_specific": {} 00:11:53.145 } 00:11:53.145 ] 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:53.145 "name": "Existed_Raid", 00:11:53.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.145 "strip_size_kb": 0, 00:11:53.145 "state": "configuring", 00:11:53.145 "raid_level": "raid1", 00:11:53.145 "superblock": false, 00:11:53.145 "num_base_bdevs": 3, 00:11:53.145 "num_base_bdevs_discovered": 2, 00:11:53.145 "num_base_bdevs_operational": 3, 00:11:53.145 "base_bdevs_list": [ 00:11:53.145 { 00:11:53.145 "name": "BaseBdev1", 00:11:53.145 "uuid": "8fd8a613-5559-4259-b0ef-61e8754f2c22", 00:11:53.145 "is_configured": true, 00:11:53.145 "data_offset": 0, 00:11:53.145 "data_size": 65536 00:11:53.145 }, 00:11:53.145 { 00:11:53.145 "name": "BaseBdev2", 00:11:53.145 "uuid": "831537ff-14ea-4901-847b-ff144df7daca", 00:11:53.145 "is_configured": true, 00:11:53.145 "data_offset": 0, 00:11:53.145 "data_size": 65536 00:11:53.145 }, 00:11:53.145 { 00:11:53.145 "name": "BaseBdev3", 00:11:53.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.145 "is_configured": false, 00:11:53.145 "data_offset": 0, 00:11:53.145 "data_size": 0 00:11:53.145 } 00:11:53.145 ] 00:11:53.145 }' 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:53.145 06:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.714 06:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:53.972 [2024-08-14 06:43:21.124607] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.972 [2024-08-14 06:43:21.124689] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:11:53.972 [2024-08-14 06:43:21.124708] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:53.972 [2024-08-14 06:43:21.124964] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:53.972 [2024-08-14 06:43:21.125089] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:11:53.972 [2024-08-14 06:43:21.125100] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:11:53.972 [2024-08-14 06:43:21.125328] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.972 BaseBdev3 00:11:53.972 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:53.972 06:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:11:53.972 06:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:53.972 06:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:53.972 06:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:53.972 06:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:53.972 06:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:54.231 06:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:54.490 [ 00:11:54.490 { 00:11:54.490 "name": "BaseBdev3", 00:11:54.490 "aliases": [ 00:11:54.490 "aac0ddf8-318f-489e-b420-ab998d3a666f" 00:11:54.490 ], 00:11:54.490 "product_name": "Malloc disk", 00:11:54.490 "block_size": 512, 00:11:54.490 "num_blocks": 65536, 00:11:54.490 "uuid": "aac0ddf8-318f-489e-b420-ab998d3a666f", 00:11:54.490 "assigned_rate_limits": { 00:11:54.490 "rw_ios_per_sec": 0, 00:11:54.490 "rw_mbytes_per_sec": 0, 00:11:54.490 "r_mbytes_per_sec": 0, 00:11:54.490 "w_mbytes_per_sec": 0 00:11:54.490 }, 00:11:54.490 "claimed": true, 00:11:54.490 "claim_type": "exclusive_write", 00:11:54.490 "zoned": false, 00:11:54.490 "supported_io_types": { 00:11:54.490 "read": true, 00:11:54.490 "write": true, 00:11:54.490 "unmap": true, 00:11:54.490 "flush": true, 00:11:54.490 "reset": true, 00:11:54.490 "nvme_admin": false, 00:11:54.490 "nvme_io": false, 00:11:54.490 "nvme_io_md": false, 00:11:54.490 "write_zeroes": true, 00:11:54.490 "zcopy": true, 00:11:54.490 "get_zone_info": false, 00:11:54.490 "zone_management": false, 00:11:54.490 "zone_append": false, 00:11:54.490 "compare": false, 00:11:54.490 "compare_and_write": false, 00:11:54.490 "abort": true, 00:11:54.490 "seek_hole": false, 00:11:54.490 "seek_data": false, 00:11:54.490 "copy": true, 00:11:54.490 "nvme_iov_md": false 00:11:54.490 }, 00:11:54.490 "memory_domains": [ 00:11:54.490 { 00:11:54.490 "dma_device_id": "system", 00:11:54.490 "dma_device_type": 1 00:11:54.490 }, 00:11:54.490 { 00:11:54.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.490 "dma_device_type": 2 00:11:54.490 } 00:11:54.490 ], 00:11:54.491 "driver_specific": {} 00:11:54.491 } 00:11:54.491 ] 00:11:54.491 06:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:54.491 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:54.491 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:54.491 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 
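[annotation] The trace above walks the raid1 volume from "configuring" to "online": the raid bdev is registered while its base bdevs are still missing, each malloc base bdev is then created and claimed, and the state flips to "online" once the last one is attached. The following is a condensed sketch of that RPC sequence, not part of the test script itself; it reuses only the commands, socket path, and bdev names visible in this trace and assumes an SPDK target is already listening on /var/tmp/spdk-raid.sock.

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # The raid1 volume is registered first; its base bdevs do not exist yet,
  # so the raid bdev stays in the "configuring" state.
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # Create the malloc base bdevs one at a time; each is claimed as it appears.
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $rpc bdev_malloc_create 32 512 -b "$b"
      $rpc bdev_wait_for_examine
      $rpc bdev_get_bdevs -b "$b" -t 2000 > /dev/null
  done

  # Once the last base bdev is claimed, the raid bdev transitions to "online".
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
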
00:11:54.491 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:54.491 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:54.491 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:54.491 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:54.491 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:54.491 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:54.491 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:54.491 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:54.491 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:54.491 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.491 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.751 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:54.751 "name": "Existed_Raid", 00:11:54.751 "uuid": "d174d8d7-f0a6-4240-b4c5-8fe02a5fc6d0", 00:11:54.751 "strip_size_kb": 0, 00:11:54.751 "state": "online", 00:11:54.751 "raid_level": "raid1", 00:11:54.751 "superblock": false, 00:11:54.751 "num_base_bdevs": 3, 00:11:54.751 "num_base_bdevs_discovered": 3, 00:11:54.751 "num_base_bdevs_operational": 3, 00:11:54.751 "base_bdevs_list": [ 00:11:54.751 { 00:11:54.751 "name": "BaseBdev1", 00:11:54.751 "uuid": "8fd8a613-5559-4259-b0ef-61e8754f2c22", 00:11:54.751 "is_configured": true, 00:11:54.751 "data_offset": 0, 00:11:54.751 "data_size": 65536 00:11:54.751 }, 00:11:54.751 { 00:11:54.751 "name": "BaseBdev2", 00:11:54.751 "uuid": "831537ff-14ea-4901-847b-ff144df7daca", 00:11:54.751 "is_configured": true, 00:11:54.751 "data_offset": 0, 00:11:54.751 "data_size": 65536 00:11:54.751 }, 00:11:54.751 { 00:11:54.751 "name": "BaseBdev3", 00:11:54.751 "uuid": "aac0ddf8-318f-489e-b420-ab998d3a666f", 00:11:54.751 "is_configured": true, 00:11:54.751 "data_offset": 0, 00:11:54.751 "data_size": 65536 00:11:54.751 } 00:11:54.751 ] 00:11:54.751 }' 00:11:54.751 06:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:54.751 06:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.320 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:55.320 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:55.320 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:55.320 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:55.320 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:55.320 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:55.320 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:55.320 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:55.320 [2024-08-14 06:43:22.538580] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:55.320 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:55.320 "name": "Existed_Raid", 00:11:55.320 "aliases": [ 00:11:55.320 "d174d8d7-f0a6-4240-b4c5-8fe02a5fc6d0" 00:11:55.320 ], 00:11:55.320 "product_name": "Raid Volume", 00:11:55.320 "block_size": 512, 00:11:55.320 "num_blocks": 65536, 00:11:55.320 "uuid": "d174d8d7-f0a6-4240-b4c5-8fe02a5fc6d0", 00:11:55.320 "assigned_rate_limits": { 00:11:55.320 "rw_ios_per_sec": 0, 00:11:55.320 "rw_mbytes_per_sec": 0, 00:11:55.320 "r_mbytes_per_sec": 0, 00:11:55.320 "w_mbytes_per_sec": 0 00:11:55.320 }, 00:11:55.320 "claimed": false, 00:11:55.320 "zoned": false, 00:11:55.320 "supported_io_types": { 00:11:55.320 "read": true, 00:11:55.320 "write": true, 00:11:55.320 "unmap": false, 00:11:55.320 "flush": false, 00:11:55.320 "reset": true, 00:11:55.320 "nvme_admin": false, 00:11:55.320 "nvme_io": false, 00:11:55.320 "nvme_io_md": false, 00:11:55.320 "write_zeroes": true, 00:11:55.320 "zcopy": false, 00:11:55.320 "get_zone_info": false, 00:11:55.320 "zone_management": false, 00:11:55.320 "zone_append": false, 00:11:55.320 "compare": false, 00:11:55.320 "compare_and_write": false, 00:11:55.320 "abort": false, 00:11:55.320 "seek_hole": false, 00:11:55.320 "seek_data": false, 00:11:55.320 "copy": false, 00:11:55.320 "nvme_iov_md": false 00:11:55.320 }, 00:11:55.320 "memory_domains": [ 00:11:55.320 { 00:11:55.320 "dma_device_id": "system", 00:11:55.320 "dma_device_type": 1 00:11:55.320 }, 00:11:55.320 { 00:11:55.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.320 "dma_device_type": 2 00:11:55.320 }, 00:11:55.320 { 00:11:55.320 "dma_device_id": "system", 00:11:55.320 "dma_device_type": 1 00:11:55.320 }, 00:11:55.320 { 00:11:55.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.320 "dma_device_type": 2 00:11:55.320 }, 00:11:55.320 { 00:11:55.320 "dma_device_id": "system", 00:11:55.320 "dma_device_type": 1 00:11:55.320 }, 00:11:55.320 { 00:11:55.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.320 "dma_device_type": 2 00:11:55.320 } 00:11:55.320 ], 00:11:55.320 "driver_specific": { 00:11:55.320 "raid": { 00:11:55.320 "uuid": "d174d8d7-f0a6-4240-b4c5-8fe02a5fc6d0", 00:11:55.320 "strip_size_kb": 0, 00:11:55.320 "state": "online", 00:11:55.320 "raid_level": "raid1", 00:11:55.320 "superblock": false, 00:11:55.320 "num_base_bdevs": 3, 00:11:55.320 "num_base_bdevs_discovered": 3, 00:11:55.320 "num_base_bdevs_operational": 3, 00:11:55.320 "base_bdevs_list": [ 00:11:55.320 { 00:11:55.320 "name": "BaseBdev1", 00:11:55.320 "uuid": "8fd8a613-5559-4259-b0ef-61e8754f2c22", 00:11:55.320 "is_configured": true, 00:11:55.320 "data_offset": 0, 00:11:55.320 "data_size": 65536 00:11:55.320 }, 00:11:55.320 { 00:11:55.320 "name": "BaseBdev2", 00:11:55.320 "uuid": "831537ff-14ea-4901-847b-ff144df7daca", 00:11:55.320 "is_configured": true, 00:11:55.320 "data_offset": 0, 00:11:55.320 "data_size": 65536 00:11:55.320 }, 00:11:55.320 { 00:11:55.320 "name": "BaseBdev3", 00:11:55.320 "uuid": "aac0ddf8-318f-489e-b420-ab998d3a666f", 00:11:55.320 "is_configured": true, 00:11:55.320 "data_offset": 0, 00:11:55.320 "data_size": 65536 00:11:55.320 } 00:11:55.320 ] 00:11:55.320 } 00:11:55.320 } 00:11:55.320 }' 00:11:55.320 06:43:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:55.579 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:55.579 BaseBdev2 00:11:55.579 BaseBdev3' 00:11:55.579 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:55.579 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:55.579 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:55.579 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:55.579 "name": "BaseBdev1", 00:11:55.579 "aliases": [ 00:11:55.579 "8fd8a613-5559-4259-b0ef-61e8754f2c22" 00:11:55.579 ], 00:11:55.579 "product_name": "Malloc disk", 00:11:55.579 "block_size": 512, 00:11:55.579 "num_blocks": 65536, 00:11:55.579 "uuid": "8fd8a613-5559-4259-b0ef-61e8754f2c22", 00:11:55.579 "assigned_rate_limits": { 00:11:55.579 "rw_ios_per_sec": 0, 00:11:55.579 "rw_mbytes_per_sec": 0, 00:11:55.579 "r_mbytes_per_sec": 0, 00:11:55.579 "w_mbytes_per_sec": 0 00:11:55.579 }, 00:11:55.579 "claimed": true, 00:11:55.579 "claim_type": "exclusive_write", 00:11:55.579 "zoned": false, 00:11:55.579 "supported_io_types": { 00:11:55.579 "read": true, 00:11:55.579 "write": true, 00:11:55.579 "unmap": true, 00:11:55.579 "flush": true, 00:11:55.579 "reset": true, 00:11:55.579 "nvme_admin": false, 00:11:55.579 "nvme_io": false, 00:11:55.579 "nvme_io_md": false, 00:11:55.579 "write_zeroes": true, 00:11:55.579 "zcopy": true, 00:11:55.579 "get_zone_info": false, 00:11:55.579 "zone_management": false, 00:11:55.579 "zone_append": false, 00:11:55.579 "compare": false, 00:11:55.579 "compare_and_write": false, 00:11:55.579 "abort": true, 00:11:55.579 "seek_hole": false, 00:11:55.579 "seek_data": false, 00:11:55.579 "copy": true, 00:11:55.579 "nvme_iov_md": false 00:11:55.579 }, 00:11:55.579 "memory_domains": [ 00:11:55.579 { 00:11:55.579 "dma_device_id": "system", 00:11:55.579 "dma_device_type": 1 00:11:55.579 }, 00:11:55.579 { 00:11:55.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.579 "dma_device_type": 2 00:11:55.579 } 00:11:55.579 ], 00:11:55.579 "driver_specific": {} 00:11:55.579 }' 00:11:55.579 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:55.838 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:55.838 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:55.838 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:55.838 06:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:55.838 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:55.838 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:55.838 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.097 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:56.097 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.097 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:11:56.097 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:56.097 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:56.097 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:56.097 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:56.356 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:56.356 "name": "BaseBdev2", 00:11:56.356 "aliases": [ 00:11:56.356 "831537ff-14ea-4901-847b-ff144df7daca" 00:11:56.356 ], 00:11:56.356 "product_name": "Malloc disk", 00:11:56.356 "block_size": 512, 00:11:56.356 "num_blocks": 65536, 00:11:56.356 "uuid": "831537ff-14ea-4901-847b-ff144df7daca", 00:11:56.356 "assigned_rate_limits": { 00:11:56.356 "rw_ios_per_sec": 0, 00:11:56.356 "rw_mbytes_per_sec": 0, 00:11:56.356 "r_mbytes_per_sec": 0, 00:11:56.356 "w_mbytes_per_sec": 0 00:11:56.356 }, 00:11:56.356 "claimed": true, 00:11:56.356 "claim_type": "exclusive_write", 00:11:56.356 "zoned": false, 00:11:56.356 "supported_io_types": { 00:11:56.356 "read": true, 00:11:56.356 "write": true, 00:11:56.356 "unmap": true, 00:11:56.356 "flush": true, 00:11:56.356 "reset": true, 00:11:56.356 "nvme_admin": false, 00:11:56.356 "nvme_io": false, 00:11:56.356 "nvme_io_md": false, 00:11:56.356 "write_zeroes": true, 00:11:56.356 "zcopy": true, 00:11:56.356 "get_zone_info": false, 00:11:56.356 "zone_management": false, 00:11:56.356 "zone_append": false, 00:11:56.356 "compare": false, 00:11:56.356 "compare_and_write": false, 00:11:56.356 "abort": true, 00:11:56.356 "seek_hole": false, 00:11:56.356 "seek_data": false, 00:11:56.356 "copy": true, 00:11:56.356 "nvme_iov_md": false 00:11:56.356 }, 00:11:56.356 "memory_domains": [ 00:11:56.356 { 00:11:56.356 "dma_device_id": "system", 00:11:56.356 "dma_device_type": 1 00:11:56.356 }, 00:11:56.356 { 00:11:56.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.356 "dma_device_type": 2 00:11:56.356 } 00:11:56.356 ], 00:11:56.356 "driver_specific": {} 00:11:56.356 }' 00:11:56.356 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.356 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.356 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:56.357 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.357 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.357 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:56.357 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.616 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.616 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:56.616 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.616 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.616 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:56.616 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:11:56.616 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:56.616 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:56.875 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:56.875 "name": "BaseBdev3", 00:11:56.875 "aliases": [ 00:11:56.875 "aac0ddf8-318f-489e-b420-ab998d3a666f" 00:11:56.875 ], 00:11:56.875 "product_name": "Malloc disk", 00:11:56.875 "block_size": 512, 00:11:56.875 "num_blocks": 65536, 00:11:56.875 "uuid": "aac0ddf8-318f-489e-b420-ab998d3a666f", 00:11:56.875 "assigned_rate_limits": { 00:11:56.875 "rw_ios_per_sec": 0, 00:11:56.875 "rw_mbytes_per_sec": 0, 00:11:56.875 "r_mbytes_per_sec": 0, 00:11:56.875 "w_mbytes_per_sec": 0 00:11:56.875 }, 00:11:56.875 "claimed": true, 00:11:56.875 "claim_type": "exclusive_write", 00:11:56.875 "zoned": false, 00:11:56.875 "supported_io_types": { 00:11:56.875 "read": true, 00:11:56.875 "write": true, 00:11:56.875 "unmap": true, 00:11:56.875 "flush": true, 00:11:56.875 "reset": true, 00:11:56.875 "nvme_admin": false, 00:11:56.875 "nvme_io": false, 00:11:56.875 "nvme_io_md": false, 00:11:56.875 "write_zeroes": true, 00:11:56.875 "zcopy": true, 00:11:56.875 "get_zone_info": false, 00:11:56.875 "zone_management": false, 00:11:56.875 "zone_append": false, 00:11:56.875 "compare": false, 00:11:56.875 "compare_and_write": false, 00:11:56.875 "abort": true, 00:11:56.875 "seek_hole": false, 00:11:56.875 "seek_data": false, 00:11:56.875 "copy": true, 00:11:56.875 "nvme_iov_md": false 00:11:56.875 }, 00:11:56.875 "memory_domains": [ 00:11:56.875 { 00:11:56.875 "dma_device_id": "system", 00:11:56.875 "dma_device_type": 1 00:11:56.875 }, 00:11:56.875 { 00:11:56.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.875 "dma_device_type": 2 00:11:56.875 } 00:11:56.875 ], 00:11:56.875 "driver_specific": {} 00:11:56.875 }' 00:11:56.875 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.875 06:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.875 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:56.875 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.875 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.134 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:57.134 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.134 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.134 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:57.134 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.134 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.134 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:57.134 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:57.394 [2024-08-14 06:43:24.562902] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.394 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.661 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:57.661 "name": "Existed_Raid", 00:11:57.661 "uuid": "d174d8d7-f0a6-4240-b4c5-8fe02a5fc6d0", 00:11:57.661 "strip_size_kb": 0, 00:11:57.661 "state": "online", 00:11:57.661 "raid_level": "raid1", 00:11:57.661 "superblock": false, 00:11:57.661 "num_base_bdevs": 3, 00:11:57.661 "num_base_bdevs_discovered": 2, 00:11:57.661 "num_base_bdevs_operational": 2, 00:11:57.661 "base_bdevs_list": [ 00:11:57.661 { 00:11:57.661 "name": null, 00:11:57.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.661 "is_configured": false, 00:11:57.661 "data_offset": 0, 00:11:57.661 "data_size": 65536 00:11:57.661 }, 00:11:57.661 { 00:11:57.661 "name": "BaseBdev2", 00:11:57.661 "uuid": "831537ff-14ea-4901-847b-ff144df7daca", 00:11:57.661 "is_configured": true, 00:11:57.661 "data_offset": 0, 00:11:57.661 "data_size": 65536 00:11:57.661 }, 00:11:57.661 { 00:11:57.661 "name": "BaseBdev3", 00:11:57.661 "uuid": "aac0ddf8-318f-489e-b420-ab998d3a666f", 00:11:57.661 "is_configured": true, 00:11:57.661 "data_offset": 0, 00:11:57.661 "data_size": 65536 00:11:57.661 } 00:11:57.661 ] 00:11:57.661 }' 00:11:57.661 06:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:57.661 06:43:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.240 06:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:58.240 06:43:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:58.240 06:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.240 06:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:58.500 06:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:58.500 06:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.500 06:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:58.760 [2024-08-14 06:43:25.784450] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:58.760 06:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:58.760 06:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:58.760 06:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.760 06:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:59.020 06:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:59.020 06:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:59.020 06:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:59.020 [2024-08-14 06:43:26.219208] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:59.020 [2024-08-14 06:43:26.219412] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.020 [2024-08-14 06:43:26.230939] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.020 [2024-08-14 06:43:26.230987] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.020 [2024-08-14 06:43:26.231003] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:11:59.020 06:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:59.020 06:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:59.020 06:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.020 06:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:59.280 06:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:59.280 06:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:59.280 06:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:59.280 06:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:59.280 06:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:59.280 06:43:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:59.540 BaseBdev2 00:11:59.540 06:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:59.540 06:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:11:59.540 06:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:59.540 06:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:59.540 06:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:59.540 06:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:59.540 06:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:59.800 06:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:00.060 [ 00:12:00.060 { 00:12:00.060 "name": "BaseBdev2", 00:12:00.060 "aliases": [ 00:12:00.060 "8b884bac-e759-4f22-856b-b15ca299c0ac" 00:12:00.060 ], 00:12:00.060 "product_name": "Malloc disk", 00:12:00.060 "block_size": 512, 00:12:00.060 "num_blocks": 65536, 00:12:00.060 "uuid": "8b884bac-e759-4f22-856b-b15ca299c0ac", 00:12:00.060 "assigned_rate_limits": { 00:12:00.060 "rw_ios_per_sec": 0, 00:12:00.060 "rw_mbytes_per_sec": 0, 00:12:00.060 "r_mbytes_per_sec": 0, 00:12:00.060 "w_mbytes_per_sec": 0 00:12:00.060 }, 00:12:00.060 "claimed": false, 00:12:00.060 "zoned": false, 00:12:00.060 "supported_io_types": { 00:12:00.060 "read": true, 00:12:00.060 "write": true, 00:12:00.060 "unmap": true, 00:12:00.060 "flush": true, 00:12:00.060 "reset": true, 00:12:00.060 "nvme_admin": false, 00:12:00.060 "nvme_io": false, 00:12:00.060 "nvme_io_md": false, 00:12:00.060 "write_zeroes": true, 00:12:00.060 "zcopy": true, 00:12:00.060 "get_zone_info": false, 00:12:00.060 "zone_management": false, 00:12:00.060 "zone_append": false, 00:12:00.060 "compare": false, 00:12:00.060 "compare_and_write": false, 00:12:00.060 "abort": true, 00:12:00.060 "seek_hole": false, 00:12:00.060 "seek_data": false, 00:12:00.060 "copy": true, 00:12:00.060 "nvme_iov_md": false 00:12:00.060 }, 00:12:00.060 "memory_domains": [ 00:12:00.060 { 00:12:00.060 "dma_device_id": "system", 00:12:00.060 "dma_device_type": 1 00:12:00.060 }, 00:12:00.060 { 00:12:00.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.060 "dma_device_type": 2 00:12:00.060 } 00:12:00.060 ], 00:12:00.060 "driver_specific": {} 00:12:00.060 } 00:12:00.060 ] 00:12:00.060 06:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:00.060 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:00.060 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:00.060 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:00.320 BaseBdev3 00:12:00.320 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:00.320 06:43:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:12:00.320 06:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:00.320 06:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:00.320 06:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:00.320 06:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:00.320 06:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:00.320 06:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:00.580 [ 00:12:00.580 { 00:12:00.580 "name": "BaseBdev3", 00:12:00.580 "aliases": [ 00:12:00.580 "5208f0be-01c0-4edb-8566-1b5369eb1aef" 00:12:00.580 ], 00:12:00.580 "product_name": "Malloc disk", 00:12:00.580 "block_size": 512, 00:12:00.580 "num_blocks": 65536, 00:12:00.580 "uuid": "5208f0be-01c0-4edb-8566-1b5369eb1aef", 00:12:00.580 "assigned_rate_limits": { 00:12:00.580 "rw_ios_per_sec": 0, 00:12:00.580 "rw_mbytes_per_sec": 0, 00:12:00.580 "r_mbytes_per_sec": 0, 00:12:00.580 "w_mbytes_per_sec": 0 00:12:00.580 }, 00:12:00.580 "claimed": false, 00:12:00.580 "zoned": false, 00:12:00.580 "supported_io_types": { 00:12:00.580 "read": true, 00:12:00.580 "write": true, 00:12:00.580 "unmap": true, 00:12:00.580 "flush": true, 00:12:00.580 "reset": true, 00:12:00.580 "nvme_admin": false, 00:12:00.580 "nvme_io": false, 00:12:00.580 "nvme_io_md": false, 00:12:00.580 "write_zeroes": true, 00:12:00.580 "zcopy": true, 00:12:00.580 "get_zone_info": false, 00:12:00.580 "zone_management": false, 00:12:00.580 "zone_append": false, 00:12:00.580 "compare": false, 00:12:00.580 "compare_and_write": false, 00:12:00.580 "abort": true, 00:12:00.580 "seek_hole": false, 00:12:00.580 "seek_data": false, 00:12:00.580 "copy": true, 00:12:00.581 "nvme_iov_md": false 00:12:00.581 }, 00:12:00.581 "memory_domains": [ 00:12:00.581 { 00:12:00.581 "dma_device_id": "system", 00:12:00.581 "dma_device_type": 1 00:12:00.581 }, 00:12:00.581 { 00:12:00.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.581 "dma_device_type": 2 00:12:00.581 } 00:12:00.581 ], 00:12:00.581 "driver_specific": {} 00:12:00.581 } 00:12:00.581 ] 00:12:00.581 06:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:00.581 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:00.581 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:00.581 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:00.840 [2024-08-14 06:43:27.965537] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:00.840 [2024-08-14 06:43:27.965594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:00.840 [2024-08-14 06:43:27.965621] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.841 [2024-08-14 06:43:27.967470] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:12:00.841 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:00.841 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:00.841 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:00.841 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:00.841 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:00.841 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:00.841 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:00.841 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:00.841 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:00.841 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:00.841 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.841 06:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.100 06:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:01.100 "name": "Existed_Raid", 00:12:01.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.100 "strip_size_kb": 0, 00:12:01.100 "state": "configuring", 00:12:01.100 "raid_level": "raid1", 00:12:01.100 "superblock": false, 00:12:01.100 "num_base_bdevs": 3, 00:12:01.100 "num_base_bdevs_discovered": 2, 00:12:01.100 "num_base_bdevs_operational": 3, 00:12:01.101 "base_bdevs_list": [ 00:12:01.101 { 00:12:01.101 "name": "BaseBdev1", 00:12:01.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.101 "is_configured": false, 00:12:01.101 "data_offset": 0, 00:12:01.101 "data_size": 0 00:12:01.101 }, 00:12:01.101 { 00:12:01.101 "name": "BaseBdev2", 00:12:01.101 "uuid": "8b884bac-e759-4f22-856b-b15ca299c0ac", 00:12:01.101 "is_configured": true, 00:12:01.101 "data_offset": 0, 00:12:01.101 "data_size": 65536 00:12:01.101 }, 00:12:01.101 { 00:12:01.101 "name": "BaseBdev3", 00:12:01.101 "uuid": "5208f0be-01c0-4edb-8566-1b5369eb1aef", 00:12:01.101 "is_configured": true, 00:12:01.101 "data_offset": 0, 00:12:01.101 "data_size": 65536 00:12:01.101 } 00:12:01.101 ] 00:12:01.101 }' 00:12:01.101 06:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:01.101 06:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.670 06:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:01.930 [2024-08-14 06:43:28.983809] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:01.930 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:01.930 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:01.930 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:12:01.930 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:01.930 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:01.930 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:01.930 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:01.930 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:01.930 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:01.930 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:01.930 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.930 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.190 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:02.190 "name": "Existed_Raid", 00:12:02.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.190 "strip_size_kb": 0, 00:12:02.190 "state": "configuring", 00:12:02.190 "raid_level": "raid1", 00:12:02.190 "superblock": false, 00:12:02.190 "num_base_bdevs": 3, 00:12:02.190 "num_base_bdevs_discovered": 1, 00:12:02.190 "num_base_bdevs_operational": 3, 00:12:02.190 "base_bdevs_list": [ 00:12:02.190 { 00:12:02.190 "name": "BaseBdev1", 00:12:02.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.190 "is_configured": false, 00:12:02.190 "data_offset": 0, 00:12:02.190 "data_size": 0 00:12:02.190 }, 00:12:02.190 { 00:12:02.190 "name": null, 00:12:02.191 "uuid": "8b884bac-e759-4f22-856b-b15ca299c0ac", 00:12:02.191 "is_configured": false, 00:12:02.191 "data_offset": 0, 00:12:02.191 "data_size": 65536 00:12:02.191 }, 00:12:02.191 { 00:12:02.191 "name": "BaseBdev3", 00:12:02.191 "uuid": "5208f0be-01c0-4edb-8566-1b5369eb1aef", 00:12:02.191 "is_configured": true, 00:12:02.191 "data_offset": 0, 00:12:02.191 "data_size": 65536 00:12:02.191 } 00:12:02.191 ] 00:12:02.191 }' 00:12:02.191 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:02.191 06:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.760 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.760 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:02.760 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:02.760 06:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:03.020 [2024-08-14 06:43:30.200722] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.020 BaseBdev1 00:12:03.020 06:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:03.020 06:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:03.020 06:43:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:03.020 06:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:03.020 06:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:03.020 06:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:03.020 06:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:03.280 06:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:03.540 [ 00:12:03.540 { 00:12:03.540 "name": "BaseBdev1", 00:12:03.540 "aliases": [ 00:12:03.540 "00ca66f9-91f7-454a-8da5-1c0489e80d8b" 00:12:03.540 ], 00:12:03.540 "product_name": "Malloc disk", 00:12:03.540 "block_size": 512, 00:12:03.540 "num_blocks": 65536, 00:12:03.540 "uuid": "00ca66f9-91f7-454a-8da5-1c0489e80d8b", 00:12:03.540 "assigned_rate_limits": { 00:12:03.540 "rw_ios_per_sec": 0, 00:12:03.540 "rw_mbytes_per_sec": 0, 00:12:03.540 "r_mbytes_per_sec": 0, 00:12:03.540 "w_mbytes_per_sec": 0 00:12:03.540 }, 00:12:03.540 "claimed": true, 00:12:03.540 "claim_type": "exclusive_write", 00:12:03.540 "zoned": false, 00:12:03.540 "supported_io_types": { 00:12:03.540 "read": true, 00:12:03.540 "write": true, 00:12:03.540 "unmap": true, 00:12:03.540 "flush": true, 00:12:03.540 "reset": true, 00:12:03.540 "nvme_admin": false, 00:12:03.540 "nvme_io": false, 00:12:03.540 "nvme_io_md": false, 00:12:03.540 "write_zeroes": true, 00:12:03.540 "zcopy": true, 00:12:03.540 "get_zone_info": false, 00:12:03.540 "zone_management": false, 00:12:03.540 "zone_append": false, 00:12:03.540 "compare": false, 00:12:03.540 "compare_and_write": false, 00:12:03.540 "abort": true, 00:12:03.540 "seek_hole": false, 00:12:03.540 "seek_data": false, 00:12:03.540 "copy": true, 00:12:03.540 "nvme_iov_md": false 00:12:03.540 }, 00:12:03.540 "memory_domains": [ 00:12:03.540 { 00:12:03.540 "dma_device_id": "system", 00:12:03.540 "dma_device_type": 1 00:12:03.540 }, 00:12:03.540 { 00:12:03.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.540 "dma_device_type": 2 00:12:03.540 } 00:12:03.541 ], 00:12:03.541 "driver_specific": {} 00:12:03.541 } 00:12:03.541 ] 00:12:03.541 06:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:03.541 06:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:03.541 06:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:03.541 06:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:03.541 06:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:03.541 06:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:03.541 06:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:03.541 06:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:03.541 06:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:03.541 06:43:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:03.541 06:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:03.541 06:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.541 06:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.816 06:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:03.817 "name": "Existed_Raid", 00:12:03.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.817 "strip_size_kb": 0, 00:12:03.817 "state": "configuring", 00:12:03.817 "raid_level": "raid1", 00:12:03.817 "superblock": false, 00:12:03.817 "num_base_bdevs": 3, 00:12:03.817 "num_base_bdevs_discovered": 2, 00:12:03.817 "num_base_bdevs_operational": 3, 00:12:03.817 "base_bdevs_list": [ 00:12:03.817 { 00:12:03.817 "name": "BaseBdev1", 00:12:03.817 "uuid": "00ca66f9-91f7-454a-8da5-1c0489e80d8b", 00:12:03.817 "is_configured": true, 00:12:03.817 "data_offset": 0, 00:12:03.817 "data_size": 65536 00:12:03.817 }, 00:12:03.817 { 00:12:03.817 "name": null, 00:12:03.817 "uuid": "8b884bac-e759-4f22-856b-b15ca299c0ac", 00:12:03.817 "is_configured": false, 00:12:03.817 "data_offset": 0, 00:12:03.817 "data_size": 65536 00:12:03.817 }, 00:12:03.817 { 00:12:03.817 "name": "BaseBdev3", 00:12:03.817 "uuid": "5208f0be-01c0-4edb-8566-1b5369eb1aef", 00:12:03.817 "is_configured": true, 00:12:03.817 "data_offset": 0, 00:12:03.817 "data_size": 65536 00:12:03.817 } 00:12:03.817 ] 00:12:03.817 }' 00:12:03.817 06:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:03.817 06:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.399 06:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.399 06:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:04.659 06:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:04.659 06:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:04.659 [2024-08-14 06:43:31.865996] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:04.659 06:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:04.659 06:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:04.659 06:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:04.659 06:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:04.659 06:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:04.659 06:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:04.659 06:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:04.659 06:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- 
# local num_base_bdevs 00:12:04.659 06:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:04.659 06:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:04.659 06:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.659 06:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.919 06:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:04.919 "name": "Existed_Raid", 00:12:04.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.919 "strip_size_kb": 0, 00:12:04.919 "state": "configuring", 00:12:04.919 "raid_level": "raid1", 00:12:04.919 "superblock": false, 00:12:04.919 "num_base_bdevs": 3, 00:12:04.919 "num_base_bdevs_discovered": 1, 00:12:04.919 "num_base_bdevs_operational": 3, 00:12:04.919 "base_bdevs_list": [ 00:12:04.919 { 00:12:04.919 "name": "BaseBdev1", 00:12:04.919 "uuid": "00ca66f9-91f7-454a-8da5-1c0489e80d8b", 00:12:04.919 "is_configured": true, 00:12:04.919 "data_offset": 0, 00:12:04.919 "data_size": 65536 00:12:04.919 }, 00:12:04.919 { 00:12:04.919 "name": null, 00:12:04.919 "uuid": "8b884bac-e759-4f22-856b-b15ca299c0ac", 00:12:04.919 "is_configured": false, 00:12:04.919 "data_offset": 0, 00:12:04.919 "data_size": 65536 00:12:04.919 }, 00:12:04.919 { 00:12:04.919 "name": null, 00:12:04.919 "uuid": "5208f0be-01c0-4edb-8566-1b5369eb1aef", 00:12:04.919 "is_configured": false, 00:12:04.920 "data_offset": 0, 00:12:04.920 "data_size": 65536 00:12:04.920 } 00:12:04.920 ] 00:12:04.920 }' 00:12:04.920 06:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:04.920 06:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.487 06:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.487 06:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:05.748 06:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:05.748 06:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:06.008 [2024-08-14 06:43:33.162211] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.008 06:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:06.008 06:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:06.008 06:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:06.008 06:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:06.008 06:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:06.008 06:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:06.008 06:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:06.008 
06:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:06.008 06:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:06.008 06:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:06.008 06:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.008 06:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.266 06:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:06.266 "name": "Existed_Raid", 00:12:06.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.266 "strip_size_kb": 0, 00:12:06.266 "state": "configuring", 00:12:06.266 "raid_level": "raid1", 00:12:06.266 "superblock": false, 00:12:06.266 "num_base_bdevs": 3, 00:12:06.266 "num_base_bdevs_discovered": 2, 00:12:06.266 "num_base_bdevs_operational": 3, 00:12:06.266 "base_bdevs_list": [ 00:12:06.266 { 00:12:06.266 "name": "BaseBdev1", 00:12:06.266 "uuid": "00ca66f9-91f7-454a-8da5-1c0489e80d8b", 00:12:06.266 "is_configured": true, 00:12:06.266 "data_offset": 0, 00:12:06.266 "data_size": 65536 00:12:06.266 }, 00:12:06.267 { 00:12:06.267 "name": null, 00:12:06.267 "uuid": "8b884bac-e759-4f22-856b-b15ca299c0ac", 00:12:06.267 "is_configured": false, 00:12:06.267 "data_offset": 0, 00:12:06.267 "data_size": 65536 00:12:06.267 }, 00:12:06.267 { 00:12:06.267 "name": "BaseBdev3", 00:12:06.267 "uuid": "5208f0be-01c0-4edb-8566-1b5369eb1aef", 00:12:06.267 "is_configured": true, 00:12:06.267 "data_offset": 0, 00:12:06.267 "data_size": 65536 00:12:06.267 } 00:12:06.267 ] 00:12:06.267 }' 00:12:06.267 06:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:06.267 06:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.833 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:06.833 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.092 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:07.092 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:07.351 [2024-08-14 06:43:34.427305] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:07.351 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:07.351 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:07.351 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:07.351 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:07.351 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:07.351 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:07.351 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:12:07.351 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:07.351 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:07.351 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:07.351 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.351 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.610 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:07.610 "name": "Existed_Raid", 00:12:07.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.610 "strip_size_kb": 0, 00:12:07.610 "state": "configuring", 00:12:07.610 "raid_level": "raid1", 00:12:07.610 "superblock": false, 00:12:07.610 "num_base_bdevs": 3, 00:12:07.610 "num_base_bdevs_discovered": 1, 00:12:07.610 "num_base_bdevs_operational": 3, 00:12:07.610 "base_bdevs_list": [ 00:12:07.610 { 00:12:07.610 "name": null, 00:12:07.610 "uuid": "00ca66f9-91f7-454a-8da5-1c0489e80d8b", 00:12:07.610 "is_configured": false, 00:12:07.610 "data_offset": 0, 00:12:07.610 "data_size": 65536 00:12:07.610 }, 00:12:07.610 { 00:12:07.610 "name": null, 00:12:07.610 "uuid": "8b884bac-e759-4f22-856b-b15ca299c0ac", 00:12:07.610 "is_configured": false, 00:12:07.610 "data_offset": 0, 00:12:07.610 "data_size": 65536 00:12:07.610 }, 00:12:07.610 { 00:12:07.610 "name": "BaseBdev3", 00:12:07.610 "uuid": "5208f0be-01c0-4edb-8566-1b5369eb1aef", 00:12:07.610 "is_configured": true, 00:12:07.610 "data_offset": 0, 00:12:07.610 "data_size": 65536 00:12:07.610 } 00:12:07.610 ] 00:12:07.610 }' 00:12:07.610 06:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:07.611 06:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.178 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.178 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:08.437 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:08.437 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:08.697 [2024-08-14 06:43:35.700135] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.697 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:08.697 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:08.697 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:08.697 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:08.697 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:08.697 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:08.697 
06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:08.697 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:08.697 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:08.697 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:08.697 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.697 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.955 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:08.955 "name": "Existed_Raid", 00:12:08.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.955 "strip_size_kb": 0, 00:12:08.955 "state": "configuring", 00:12:08.955 "raid_level": "raid1", 00:12:08.955 "superblock": false, 00:12:08.955 "num_base_bdevs": 3, 00:12:08.955 "num_base_bdevs_discovered": 2, 00:12:08.955 "num_base_bdevs_operational": 3, 00:12:08.955 "base_bdevs_list": [ 00:12:08.955 { 00:12:08.955 "name": null, 00:12:08.955 "uuid": "00ca66f9-91f7-454a-8da5-1c0489e80d8b", 00:12:08.955 "is_configured": false, 00:12:08.955 "data_offset": 0, 00:12:08.955 "data_size": 65536 00:12:08.955 }, 00:12:08.955 { 00:12:08.955 "name": "BaseBdev2", 00:12:08.955 "uuid": "8b884bac-e759-4f22-856b-b15ca299c0ac", 00:12:08.955 "is_configured": true, 00:12:08.955 "data_offset": 0, 00:12:08.955 "data_size": 65536 00:12:08.955 }, 00:12:08.955 { 00:12:08.955 "name": "BaseBdev3", 00:12:08.955 "uuid": "5208f0be-01c0-4edb-8566-1b5369eb1aef", 00:12:08.955 "is_configured": true, 00:12:08.955 "data_offset": 0, 00:12:08.955 "data_size": 65536 00:12:08.955 } 00:12:08.955 ] 00:12:08.955 }' 00:12:08.955 06:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:08.955 06:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.523 06:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.523 06:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:09.782 06:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:09.782 06:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.782 06:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:10.043 06:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 00ca66f9-91f7-454a-8da5-1c0489e80d8b 00:12:10.043 [2024-08-14 06:43:37.271925] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:10.043 [2024-08-14 06:43:37.271981] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:12:10.043 [2024-08-14 06:43:37.271991] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:10.043 [2024-08-14 06:43:37.272271] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:10.043 [2024-08-14 06:43:37.272415] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:12:10.043 [2024-08-14 06:43:37.272432] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:12:10.043 [2024-08-14 06:43:37.272646] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.043 NewBaseBdev 00:12:10.043 06:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:10.043 06:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:12:10.043 06:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:10.043 06:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:10.043 06:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:10.043 06:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:10.043 06:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:10.303 06:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:10.563 [ 00:12:10.563 { 00:12:10.563 "name": "NewBaseBdev", 00:12:10.563 "aliases": [ 00:12:10.563 "00ca66f9-91f7-454a-8da5-1c0489e80d8b" 00:12:10.563 ], 00:12:10.563 "product_name": "Malloc disk", 00:12:10.563 "block_size": 512, 00:12:10.563 "num_blocks": 65536, 00:12:10.563 "uuid": "00ca66f9-91f7-454a-8da5-1c0489e80d8b", 00:12:10.563 "assigned_rate_limits": { 00:12:10.563 "rw_ios_per_sec": 0, 00:12:10.563 "rw_mbytes_per_sec": 0, 00:12:10.563 "r_mbytes_per_sec": 0, 00:12:10.563 "w_mbytes_per_sec": 0 00:12:10.563 }, 00:12:10.563 "claimed": true, 00:12:10.563 "claim_type": "exclusive_write", 00:12:10.563 "zoned": false, 00:12:10.563 "supported_io_types": { 00:12:10.563 "read": true, 00:12:10.563 "write": true, 00:12:10.563 "unmap": true, 00:12:10.563 "flush": true, 00:12:10.563 "reset": true, 00:12:10.563 "nvme_admin": false, 00:12:10.563 "nvme_io": false, 00:12:10.563 "nvme_io_md": false, 00:12:10.563 "write_zeroes": true, 00:12:10.563 "zcopy": true, 00:12:10.563 "get_zone_info": false, 00:12:10.563 "zone_management": false, 00:12:10.563 "zone_append": false, 00:12:10.563 "compare": false, 00:12:10.563 "compare_and_write": false, 00:12:10.563 "abort": true, 00:12:10.563 "seek_hole": false, 00:12:10.563 "seek_data": false, 00:12:10.563 "copy": true, 00:12:10.563 "nvme_iov_md": false 00:12:10.563 }, 00:12:10.563 "memory_domains": [ 00:12:10.563 { 00:12:10.563 "dma_device_id": "system", 00:12:10.563 "dma_device_type": 1 00:12:10.563 }, 00:12:10.563 { 00:12:10.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.563 "dma_device_type": 2 00:12:10.563 } 00:12:10.563 ], 00:12:10.563 "driver_specific": {} 00:12:10.563 } 00:12:10.563 ] 00:12:10.563 06:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:10.563 06:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:10.563 06:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=Existed_Raid 00:12:10.563 06:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:10.563 06:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:10.563 06:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:10.563 06:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:10.563 06:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:10.563 06:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:10.563 06:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:10.563 06:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:10.563 06:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.563 06:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.823 06:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:10.823 "name": "Existed_Raid", 00:12:10.823 "uuid": "c4d38f47-cbf6-4b25-8b70-502c38bca72f", 00:12:10.823 "strip_size_kb": 0, 00:12:10.823 "state": "online", 00:12:10.823 "raid_level": "raid1", 00:12:10.823 "superblock": false, 00:12:10.823 "num_base_bdevs": 3, 00:12:10.823 "num_base_bdevs_discovered": 3, 00:12:10.823 "num_base_bdevs_operational": 3, 00:12:10.823 "base_bdevs_list": [ 00:12:10.823 { 00:12:10.823 "name": "NewBaseBdev", 00:12:10.823 "uuid": "00ca66f9-91f7-454a-8da5-1c0489e80d8b", 00:12:10.823 "is_configured": true, 00:12:10.823 "data_offset": 0, 00:12:10.823 "data_size": 65536 00:12:10.823 }, 00:12:10.823 { 00:12:10.823 "name": "BaseBdev2", 00:12:10.823 "uuid": "8b884bac-e759-4f22-856b-b15ca299c0ac", 00:12:10.823 "is_configured": true, 00:12:10.823 "data_offset": 0, 00:12:10.823 "data_size": 65536 00:12:10.823 }, 00:12:10.823 { 00:12:10.823 "name": "BaseBdev3", 00:12:10.823 "uuid": "5208f0be-01c0-4edb-8566-1b5369eb1aef", 00:12:10.823 "is_configured": true, 00:12:10.823 "data_offset": 0, 00:12:10.823 "data_size": 65536 00:12:10.823 } 00:12:10.823 ] 00:12:10.823 }' 00:12:10.823 06:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:10.823 06:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.393 06:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:11.393 06:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:11.393 06:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:11.393 06:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:11.393 06:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:11.393 06:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:11.393 06:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:11.393 06:43:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:11.652 [2024-08-14 06:43:38.709887] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:11.652 06:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:11.652 "name": "Existed_Raid", 00:12:11.652 "aliases": [ 00:12:11.653 "c4d38f47-cbf6-4b25-8b70-502c38bca72f" 00:12:11.653 ], 00:12:11.653 "product_name": "Raid Volume", 00:12:11.653 "block_size": 512, 00:12:11.653 "num_blocks": 65536, 00:12:11.653 "uuid": "c4d38f47-cbf6-4b25-8b70-502c38bca72f", 00:12:11.653 "assigned_rate_limits": { 00:12:11.653 "rw_ios_per_sec": 0, 00:12:11.653 "rw_mbytes_per_sec": 0, 00:12:11.653 "r_mbytes_per_sec": 0, 00:12:11.653 "w_mbytes_per_sec": 0 00:12:11.653 }, 00:12:11.653 "claimed": false, 00:12:11.653 "zoned": false, 00:12:11.653 "supported_io_types": { 00:12:11.653 "read": true, 00:12:11.653 "write": true, 00:12:11.653 "unmap": false, 00:12:11.653 "flush": false, 00:12:11.653 "reset": true, 00:12:11.653 "nvme_admin": false, 00:12:11.653 "nvme_io": false, 00:12:11.653 "nvme_io_md": false, 00:12:11.653 "write_zeroes": true, 00:12:11.653 "zcopy": false, 00:12:11.653 "get_zone_info": false, 00:12:11.653 "zone_management": false, 00:12:11.653 "zone_append": false, 00:12:11.653 "compare": false, 00:12:11.653 "compare_and_write": false, 00:12:11.653 "abort": false, 00:12:11.653 "seek_hole": false, 00:12:11.653 "seek_data": false, 00:12:11.653 "copy": false, 00:12:11.653 "nvme_iov_md": false 00:12:11.653 }, 00:12:11.653 "memory_domains": [ 00:12:11.653 { 00:12:11.653 "dma_device_id": "system", 00:12:11.653 "dma_device_type": 1 00:12:11.653 }, 00:12:11.653 { 00:12:11.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.653 "dma_device_type": 2 00:12:11.653 }, 00:12:11.653 { 00:12:11.653 "dma_device_id": "system", 00:12:11.653 "dma_device_type": 1 00:12:11.653 }, 00:12:11.653 { 00:12:11.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.653 "dma_device_type": 2 00:12:11.653 }, 00:12:11.653 { 00:12:11.653 "dma_device_id": "system", 00:12:11.653 "dma_device_type": 1 00:12:11.653 }, 00:12:11.653 { 00:12:11.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.653 "dma_device_type": 2 00:12:11.653 } 00:12:11.653 ], 00:12:11.653 "driver_specific": { 00:12:11.653 "raid": { 00:12:11.653 "uuid": "c4d38f47-cbf6-4b25-8b70-502c38bca72f", 00:12:11.653 "strip_size_kb": 0, 00:12:11.653 "state": "online", 00:12:11.653 "raid_level": "raid1", 00:12:11.653 "superblock": false, 00:12:11.653 "num_base_bdevs": 3, 00:12:11.653 "num_base_bdevs_discovered": 3, 00:12:11.653 "num_base_bdevs_operational": 3, 00:12:11.653 "base_bdevs_list": [ 00:12:11.653 { 00:12:11.653 "name": "NewBaseBdev", 00:12:11.653 "uuid": "00ca66f9-91f7-454a-8da5-1c0489e80d8b", 00:12:11.653 "is_configured": true, 00:12:11.653 "data_offset": 0, 00:12:11.653 "data_size": 65536 00:12:11.653 }, 00:12:11.653 { 00:12:11.653 "name": "BaseBdev2", 00:12:11.653 "uuid": "8b884bac-e759-4f22-856b-b15ca299c0ac", 00:12:11.653 "is_configured": true, 00:12:11.653 "data_offset": 0, 00:12:11.653 "data_size": 65536 00:12:11.653 }, 00:12:11.653 { 00:12:11.653 "name": "BaseBdev3", 00:12:11.653 "uuid": "5208f0be-01c0-4edb-8566-1b5369eb1aef", 00:12:11.653 "is_configured": true, 00:12:11.653 "data_offset": 0, 00:12:11.653 "data_size": 65536 00:12:11.653 } 00:12:11.653 ] 00:12:11.653 } 00:12:11.653 } 00:12:11.653 }' 00:12:11.653 06:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:11.653 06:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:11.653 BaseBdev2 00:12:11.653 BaseBdev3' 00:12:11.653 06:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:11.653 06:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:11.653 06:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:11.913 06:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:11.913 "name": "NewBaseBdev", 00:12:11.913 "aliases": [ 00:12:11.913 "00ca66f9-91f7-454a-8da5-1c0489e80d8b" 00:12:11.913 ], 00:12:11.913 "product_name": "Malloc disk", 00:12:11.913 "block_size": 512, 00:12:11.913 "num_blocks": 65536, 00:12:11.913 "uuid": "00ca66f9-91f7-454a-8da5-1c0489e80d8b", 00:12:11.913 "assigned_rate_limits": { 00:12:11.913 "rw_ios_per_sec": 0, 00:12:11.913 "rw_mbytes_per_sec": 0, 00:12:11.913 "r_mbytes_per_sec": 0, 00:12:11.913 "w_mbytes_per_sec": 0 00:12:11.913 }, 00:12:11.913 "claimed": true, 00:12:11.913 "claim_type": "exclusive_write", 00:12:11.913 "zoned": false, 00:12:11.913 "supported_io_types": { 00:12:11.913 "read": true, 00:12:11.913 "write": true, 00:12:11.913 "unmap": true, 00:12:11.913 "flush": true, 00:12:11.913 "reset": true, 00:12:11.913 "nvme_admin": false, 00:12:11.913 "nvme_io": false, 00:12:11.913 "nvme_io_md": false, 00:12:11.913 "write_zeroes": true, 00:12:11.913 "zcopy": true, 00:12:11.913 "get_zone_info": false, 00:12:11.913 "zone_management": false, 00:12:11.913 "zone_append": false, 00:12:11.913 "compare": false, 00:12:11.913 "compare_and_write": false, 00:12:11.913 "abort": true, 00:12:11.913 "seek_hole": false, 00:12:11.913 "seek_data": false, 00:12:11.913 "copy": true, 00:12:11.913 "nvme_iov_md": false 00:12:11.913 }, 00:12:11.913 "memory_domains": [ 00:12:11.913 { 00:12:11.913 "dma_device_id": "system", 00:12:11.913 "dma_device_type": 1 00:12:11.913 }, 00:12:11.913 { 00:12:11.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.913 "dma_device_type": 2 00:12:11.913 } 00:12:11.913 ], 00:12:11.913 "driver_specific": {} 00:12:11.913 }' 00:12:11.913 06:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:11.913 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:11.913 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:11.913 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:11.913 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:11.913 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:11.913 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:12.173 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:12.173 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:12.173 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:12.173 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:12.173 06:43:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:12.173 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:12.173 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:12.173 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:12.433 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:12.433 "name": "BaseBdev2", 00:12:12.433 "aliases": [ 00:12:12.433 "8b884bac-e759-4f22-856b-b15ca299c0ac" 00:12:12.433 ], 00:12:12.433 "product_name": "Malloc disk", 00:12:12.433 "block_size": 512, 00:12:12.433 "num_blocks": 65536, 00:12:12.433 "uuid": "8b884bac-e759-4f22-856b-b15ca299c0ac", 00:12:12.433 "assigned_rate_limits": { 00:12:12.433 "rw_ios_per_sec": 0, 00:12:12.433 "rw_mbytes_per_sec": 0, 00:12:12.433 "r_mbytes_per_sec": 0, 00:12:12.433 "w_mbytes_per_sec": 0 00:12:12.433 }, 00:12:12.433 "claimed": true, 00:12:12.433 "claim_type": "exclusive_write", 00:12:12.433 "zoned": false, 00:12:12.433 "supported_io_types": { 00:12:12.433 "read": true, 00:12:12.433 "write": true, 00:12:12.433 "unmap": true, 00:12:12.433 "flush": true, 00:12:12.433 "reset": true, 00:12:12.433 "nvme_admin": false, 00:12:12.433 "nvme_io": false, 00:12:12.433 "nvme_io_md": false, 00:12:12.433 "write_zeroes": true, 00:12:12.433 "zcopy": true, 00:12:12.433 "get_zone_info": false, 00:12:12.433 "zone_management": false, 00:12:12.433 "zone_append": false, 00:12:12.433 "compare": false, 00:12:12.433 "compare_and_write": false, 00:12:12.433 "abort": true, 00:12:12.433 "seek_hole": false, 00:12:12.433 "seek_data": false, 00:12:12.433 "copy": true, 00:12:12.433 "nvme_iov_md": false 00:12:12.433 }, 00:12:12.433 "memory_domains": [ 00:12:12.433 { 00:12:12.433 "dma_device_id": "system", 00:12:12.433 "dma_device_type": 1 00:12:12.433 }, 00:12:12.433 { 00:12:12.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.433 "dma_device_type": 2 00:12:12.433 } 00:12:12.433 ], 00:12:12.433 "driver_specific": {} 00:12:12.433 }' 00:12:12.433 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:12.433 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:12.433 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:12.433 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:12.433 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:12.693 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:12.693 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:12.693 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:12.693 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:12.693 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:12.693 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:12.693 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:12.693 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:12.693 06:43:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:12.693 06:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:12.953 06:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:12.953 "name": "BaseBdev3", 00:12:12.953 "aliases": [ 00:12:12.953 "5208f0be-01c0-4edb-8566-1b5369eb1aef" 00:12:12.953 ], 00:12:12.953 "product_name": "Malloc disk", 00:12:12.953 "block_size": 512, 00:12:12.953 "num_blocks": 65536, 00:12:12.953 "uuid": "5208f0be-01c0-4edb-8566-1b5369eb1aef", 00:12:12.953 "assigned_rate_limits": { 00:12:12.953 "rw_ios_per_sec": 0, 00:12:12.953 "rw_mbytes_per_sec": 0, 00:12:12.953 "r_mbytes_per_sec": 0, 00:12:12.953 "w_mbytes_per_sec": 0 00:12:12.953 }, 00:12:12.953 "claimed": true, 00:12:12.953 "claim_type": "exclusive_write", 00:12:12.953 "zoned": false, 00:12:12.953 "supported_io_types": { 00:12:12.953 "read": true, 00:12:12.953 "write": true, 00:12:12.953 "unmap": true, 00:12:12.953 "flush": true, 00:12:12.953 "reset": true, 00:12:12.953 "nvme_admin": false, 00:12:12.953 "nvme_io": false, 00:12:12.953 "nvme_io_md": false, 00:12:12.953 "write_zeroes": true, 00:12:12.953 "zcopy": true, 00:12:12.953 "get_zone_info": false, 00:12:12.953 "zone_management": false, 00:12:12.953 "zone_append": false, 00:12:12.953 "compare": false, 00:12:12.953 "compare_and_write": false, 00:12:12.953 "abort": true, 00:12:12.953 "seek_hole": false, 00:12:12.953 "seek_data": false, 00:12:12.953 "copy": true, 00:12:12.953 "nvme_iov_md": false 00:12:12.953 }, 00:12:12.953 "memory_domains": [ 00:12:12.953 { 00:12:12.953 "dma_device_id": "system", 00:12:12.953 "dma_device_type": 1 00:12:12.953 }, 00:12:12.953 { 00:12:12.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.953 "dma_device_type": 2 00:12:12.953 } 00:12:12.953 ], 00:12:12.953 "driver_specific": {} 00:12:12.953 }' 00:12:12.953 06:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:13.212 06:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:13.212 06:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:13.212 06:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:13.212 06:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:13.212 06:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:13.212 06:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:13.212 06:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:13.212 06:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:13.472 06:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:13.472 06:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:13.472 06:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:13.472 06:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:13.732 [2024-08-14 06:43:40.730242] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.732 [2024-08-14 
06:43:40.730278] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:13.732 [2024-08-14 06:43:40.730377] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.732 [2024-08-14 06:43:40.730661] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.732 [2024-08-14 06:43:40.730677] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:12:13.732 06:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 80701 00:12:13.732 06:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 80701 ']' 00:12:13.732 06:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 80701 00:12:13.732 06:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:12:13.732 06:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:13.732 06:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80701 00:12:13.732 06:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:13.732 06:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:13.732 06:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80701' 00:12:13.732 killing process with pid 80701 00:12:13.732 06:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 80701 00:12:13.732 [2024-08-14 06:43:40.777380] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:13.732 06:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 80701 00:12:13.732 [2024-08-14 06:43:40.807861] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:13.993 00:12:13.993 real 0m26.169s 00:12:13.993 user 0m48.761s 00:12:13.993 sys 0m3.836s 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:13.993 ************************************ 00:12:13.993 END TEST raid_state_function_test 00:12:13.993 ************************************ 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.993 06:43:41 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:12:13.993 06:43:41 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:13.993 06:43:41 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:13.993 06:43:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:13.993 ************************************ 00:12:13.993 START TEST raid_state_function_test_sb 00:12:13.993 ************************************ 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 true 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=81615 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 81615' 00:12:13.993 Process raid pid: 81615 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 81615 /var/tmp/spdk-raid.sock 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 81615 ']' 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:13.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
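The superblock variant that starts here exercises the same RPC surface as the preceding test, only passing -s to bdev_raid_create so each base bdev carries an on-disk superblock (hence the data_offset of 2048 in the JSON dumps later in this test). A minimal standalone sketch of that flow, assuming a target already listening on the raid socket and using the rpc.py path from the trace; the bdev names and malloc sizes are copied from the log, and the ordering is simplified (the real test registers the raid before the base bdevs exist):

#!/usr/bin/env bash
# Sketch of the superblock (-s) RAID1 create-and-verify flow from the trace.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Three 32 MiB malloc base bdevs with 512-byte blocks, as in the test.
for b in BaseBdev1 BaseBdev2 BaseBdev3; do
    $rpc bdev_malloc_create 32 512 -b "$b"
done

# -s writes a superblock onto each base bdev; -r raid1 selects mirroring.
$rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# Same state check the test script performs: list all raid bdevs, filter by name.
$rpc bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expect: online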
00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:13.993 06:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.993 [2024-08-14 06:43:41.206428] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:12:13.993 [2024-08-14 06:43:41.206559] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.254 [2024-08-14 06:43:41.351826] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.254 [2024-08-14 06:43:41.396295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.254 [2024-08-14 06:43:41.438091] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.254 [2024-08-14 06:43:41.438125] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.822 06:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:14.822 06:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:12:14.822 06:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:15.083 [2024-08-14 06:43:42.237376] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:15.083 [2024-08-14 06:43:42.237508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:15.083 [2024-08-14 06:43:42.237544] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:15.083 [2024-08-14 06:43:42.237554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:15.083 [2024-08-14 06:43:42.237568] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:15.083 [2024-08-14 06:43:42.237577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:15.083 06:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:15.083 06:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:15.083 06:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:15.083 06:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:15.083 06:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:15.083 06:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:15.083 06:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:15.083 06:43:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:15.083 06:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:15.083 06:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:15.084 06:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.084 06:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:15.343 06:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:15.343 "name": "Existed_Raid", 00:12:15.343 "uuid": "94dc50f7-6a2d-49cb-8e2b-804b5183bae9", 00:12:15.343 "strip_size_kb": 0, 00:12:15.343 "state": "configuring", 00:12:15.343 "raid_level": "raid1", 00:12:15.343 "superblock": true, 00:12:15.344 "num_base_bdevs": 3, 00:12:15.344 "num_base_bdevs_discovered": 0, 00:12:15.344 "num_base_bdevs_operational": 3, 00:12:15.344 "base_bdevs_list": [ 00:12:15.344 { 00:12:15.344 "name": "BaseBdev1", 00:12:15.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.344 "is_configured": false, 00:12:15.344 "data_offset": 0, 00:12:15.344 "data_size": 0 00:12:15.344 }, 00:12:15.344 { 00:12:15.344 "name": "BaseBdev2", 00:12:15.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.344 "is_configured": false, 00:12:15.344 "data_offset": 0, 00:12:15.344 "data_size": 0 00:12:15.344 }, 00:12:15.344 { 00:12:15.344 "name": "BaseBdev3", 00:12:15.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.344 "is_configured": false, 00:12:15.344 "data_offset": 0, 00:12:15.344 "data_size": 0 00:12:15.344 } 00:12:15.344 ] 00:12:15.344 }' 00:12:15.344 06:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:15.344 06:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.913 06:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:16.173 [2024-08-14 06:43:43.171632] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:16.173 [2024-08-14 06:43:43.171672] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:12:16.173 06:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:16.173 [2024-08-14 06:43:43.391288] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:16.173 [2024-08-14 06:43:43.391331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:16.173 [2024-08-14 06:43:43.391342] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:16.173 [2024-08-14 06:43:43.391349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:16.173 [2024-08-14 06:43:43.391357] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:16.173 [2024-08-14 06:43:43.391364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:16.173 06:43:43 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:16.433 [2024-08-14 06:43:43.607843] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.433 BaseBdev1 00:12:16.433 06:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:16.433 06:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:16.433 06:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:16.433 06:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:16.433 06:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:16.433 06:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:16.433 06:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:16.691 06:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:16.950 [ 00:12:16.950 { 00:12:16.950 "name": "BaseBdev1", 00:12:16.950 "aliases": [ 00:12:16.950 "c71b42f7-ec7b-4877-a200-65342713efb5" 00:12:16.950 ], 00:12:16.950 "product_name": "Malloc disk", 00:12:16.950 "block_size": 512, 00:12:16.950 "num_blocks": 65536, 00:12:16.950 "uuid": "c71b42f7-ec7b-4877-a200-65342713efb5", 00:12:16.950 "assigned_rate_limits": { 00:12:16.950 "rw_ios_per_sec": 0, 00:12:16.950 "rw_mbytes_per_sec": 0, 00:12:16.950 "r_mbytes_per_sec": 0, 00:12:16.950 "w_mbytes_per_sec": 0 00:12:16.950 }, 00:12:16.950 "claimed": true, 00:12:16.950 "claim_type": "exclusive_write", 00:12:16.950 "zoned": false, 00:12:16.950 "supported_io_types": { 00:12:16.950 "read": true, 00:12:16.950 "write": true, 00:12:16.950 "unmap": true, 00:12:16.950 "flush": true, 00:12:16.950 "reset": true, 00:12:16.950 "nvme_admin": false, 00:12:16.950 "nvme_io": false, 00:12:16.950 "nvme_io_md": false, 00:12:16.950 "write_zeroes": true, 00:12:16.950 "zcopy": true, 00:12:16.950 "get_zone_info": false, 00:12:16.950 "zone_management": false, 00:12:16.950 "zone_append": false, 00:12:16.950 "compare": false, 00:12:16.950 "compare_and_write": false, 00:12:16.950 "abort": true, 00:12:16.950 "seek_hole": false, 00:12:16.950 "seek_data": false, 00:12:16.950 "copy": true, 00:12:16.950 "nvme_iov_md": false 00:12:16.950 }, 00:12:16.950 "memory_domains": [ 00:12:16.950 { 00:12:16.950 "dma_device_id": "system", 00:12:16.950 "dma_device_type": 1 00:12:16.950 }, 00:12:16.950 { 00:12:16.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.950 "dma_device_type": 2 00:12:16.950 } 00:12:16.950 ], 00:12:16.950 "driver_specific": {} 00:12:16.950 } 00:12:16.950 ] 00:12:16.950 06:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:16.950 06:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:16.950 06:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:16.950 06:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:16.950 06:43:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:16.950 06:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:16.950 06:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:16.950 06:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:16.950 06:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:16.950 06:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:16.950 06:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:16.950 06:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.950 06:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.209 06:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:17.209 "name": "Existed_Raid", 00:12:17.209 "uuid": "9aeb7910-7d4c-4391-b28d-084e1116503c", 00:12:17.209 "strip_size_kb": 0, 00:12:17.209 "state": "configuring", 00:12:17.209 "raid_level": "raid1", 00:12:17.209 "superblock": true, 00:12:17.209 "num_base_bdevs": 3, 00:12:17.209 "num_base_bdevs_discovered": 1, 00:12:17.209 "num_base_bdevs_operational": 3, 00:12:17.209 "base_bdevs_list": [ 00:12:17.209 { 00:12:17.209 "name": "BaseBdev1", 00:12:17.209 "uuid": "c71b42f7-ec7b-4877-a200-65342713efb5", 00:12:17.209 "is_configured": true, 00:12:17.209 "data_offset": 2048, 00:12:17.209 "data_size": 63488 00:12:17.209 }, 00:12:17.209 { 00:12:17.209 "name": "BaseBdev2", 00:12:17.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.209 "is_configured": false, 00:12:17.209 "data_offset": 0, 00:12:17.209 "data_size": 0 00:12:17.209 }, 00:12:17.209 { 00:12:17.209 "name": "BaseBdev3", 00:12:17.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.209 "is_configured": false, 00:12:17.209 "data_offset": 0, 00:12:17.209 "data_size": 0 00:12:17.209 } 00:12:17.209 ] 00:12:17.209 }' 00:12:17.209 06:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:17.209 06:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.776 06:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:17.776 [2024-08-14 06:43:44.953777] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:17.776 [2024-08-14 06:43:44.953881] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:12:17.776 06:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:18.035 [2024-08-14 06:43:45.161525] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.035 [2024-08-14 06:43:45.163834] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:18.035 [2024-08-14 06:43:45.163888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:18.035 [2024-08-14 06:43:45.163902] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:18.035 [2024-08-14 06:43:45.163910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:18.035 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:18.035 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:18.035 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:18.035 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:18.035 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:18.035 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:18.035 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:18.035 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:18.035 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:18.035 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:18.035 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:18.035 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:18.035 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.035 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.293 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:18.293 "name": "Existed_Raid", 00:12:18.293 "uuid": "8f69c828-685b-4e04-b6af-7550406f27f1", 00:12:18.293 "strip_size_kb": 0, 00:12:18.293 "state": "configuring", 00:12:18.293 "raid_level": "raid1", 00:12:18.293 "superblock": true, 00:12:18.293 "num_base_bdevs": 3, 00:12:18.293 "num_base_bdevs_discovered": 1, 00:12:18.293 "num_base_bdevs_operational": 3, 00:12:18.293 "base_bdevs_list": [ 00:12:18.293 { 00:12:18.293 "name": "BaseBdev1", 00:12:18.293 "uuid": "c71b42f7-ec7b-4877-a200-65342713efb5", 00:12:18.293 "is_configured": true, 00:12:18.293 "data_offset": 2048, 00:12:18.293 "data_size": 63488 00:12:18.293 }, 00:12:18.293 { 00:12:18.293 "name": "BaseBdev2", 00:12:18.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.293 "is_configured": false, 00:12:18.293 "data_offset": 0, 00:12:18.293 "data_size": 0 00:12:18.293 }, 00:12:18.293 { 00:12:18.293 "name": "BaseBdev3", 00:12:18.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.293 "is_configured": false, 00:12:18.293 "data_offset": 0, 00:12:18.293 "data_size": 0 00:12:18.293 } 00:12:18.293 ] 00:12:18.293 }' 00:12:18.293 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:18.293 06:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.859 06:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:19.118 [2024-08-14 06:43:46.124777] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:19.118 BaseBdev2 00:12:19.118 06:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:19.118 06:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:19.118 06:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:19.118 06:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:19.118 06:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:19.118 06:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:19.118 06:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:19.118 06:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:19.377 [ 00:12:19.377 { 00:12:19.377 "name": "BaseBdev2", 00:12:19.377 "aliases": [ 00:12:19.377 "e72fbcbe-844a-46dd-a1cd-fe7ce4ead59d" 00:12:19.377 ], 00:12:19.377 "product_name": "Malloc disk", 00:12:19.377 "block_size": 512, 00:12:19.377 "num_blocks": 65536, 00:12:19.377 "uuid": "e72fbcbe-844a-46dd-a1cd-fe7ce4ead59d", 00:12:19.377 "assigned_rate_limits": { 00:12:19.377 "rw_ios_per_sec": 0, 00:12:19.377 "rw_mbytes_per_sec": 0, 00:12:19.377 "r_mbytes_per_sec": 0, 00:12:19.377 "w_mbytes_per_sec": 0 00:12:19.377 }, 00:12:19.377 "claimed": true, 00:12:19.377 "claim_type": "exclusive_write", 00:12:19.377 "zoned": false, 00:12:19.377 "supported_io_types": { 00:12:19.377 "read": true, 00:12:19.377 "write": true, 00:12:19.377 "unmap": true, 00:12:19.377 "flush": true, 00:12:19.377 "reset": true, 00:12:19.377 "nvme_admin": false, 00:12:19.378 "nvme_io": false, 00:12:19.378 "nvme_io_md": false, 00:12:19.378 "write_zeroes": true, 00:12:19.378 "zcopy": true, 00:12:19.378 "get_zone_info": false, 00:12:19.378 "zone_management": false, 00:12:19.378 "zone_append": false, 00:12:19.378 "compare": false, 00:12:19.378 "compare_and_write": false, 00:12:19.378 "abort": true, 00:12:19.378 "seek_hole": false, 00:12:19.378 "seek_data": false, 00:12:19.378 "copy": true, 00:12:19.378 "nvme_iov_md": false 00:12:19.378 }, 00:12:19.378 "memory_domains": [ 00:12:19.378 { 00:12:19.378 "dma_device_id": "system", 00:12:19.378 "dma_device_type": 1 00:12:19.378 }, 00:12:19.378 { 00:12:19.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.378 "dma_device_type": 2 00:12:19.378 } 00:12:19.378 ], 00:12:19.378 "driver_specific": {} 00:12:19.378 } 00:12:19.378 ] 00:12:19.378 06:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:19.378 06:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:19.378 06:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:19.378 06:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:19.378 06:43:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:19.378 06:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:19.378 06:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:19.378 06:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:19.378 06:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:19.378 06:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:19.378 06:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:19.378 06:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:19.378 06:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:19.378 06:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:19.378 06:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.638 06:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:19.638 "name": "Existed_Raid", 00:12:19.638 "uuid": "8f69c828-685b-4e04-b6af-7550406f27f1", 00:12:19.638 "strip_size_kb": 0, 00:12:19.638 "state": "configuring", 00:12:19.638 "raid_level": "raid1", 00:12:19.638 "superblock": true, 00:12:19.638 "num_base_bdevs": 3, 00:12:19.638 "num_base_bdevs_discovered": 2, 00:12:19.638 "num_base_bdevs_operational": 3, 00:12:19.638 "base_bdevs_list": [ 00:12:19.638 { 00:12:19.638 "name": "BaseBdev1", 00:12:19.638 "uuid": "c71b42f7-ec7b-4877-a200-65342713efb5", 00:12:19.638 "is_configured": true, 00:12:19.638 "data_offset": 2048, 00:12:19.638 "data_size": 63488 00:12:19.638 }, 00:12:19.638 { 00:12:19.638 "name": "BaseBdev2", 00:12:19.638 "uuid": "e72fbcbe-844a-46dd-a1cd-fe7ce4ead59d", 00:12:19.638 "is_configured": true, 00:12:19.638 "data_offset": 2048, 00:12:19.638 "data_size": 63488 00:12:19.638 }, 00:12:19.638 { 00:12:19.638 "name": "BaseBdev3", 00:12:19.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.638 "is_configured": false, 00:12:19.638 "data_offset": 0, 00:12:19.638 "data_size": 0 00:12:19.638 } 00:12:19.638 ] 00:12:19.638 }' 00:12:19.638 06:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:19.638 06:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.222 06:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:20.222 [2024-08-14 06:43:47.429965] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.222 [2024-08-14 06:43:47.430311] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:12:20.222 [2024-08-14 06:43:47.430368] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:20.222 [2024-08-14 06:43:47.430693] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:20.222 [2024-08-14 06:43:47.430876] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 
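With all three base bdevs claimed the raid moves from configuring to online, and the trace next enters the waitforbdev helper for BaseBdev3. That helper reduces to two RPCs that both appear verbatim in the log; a simplified stand-in is sketched below (the retry loop and sleep are assumptions and the real helper's internals may differ; the 2000 ms timeout matches the value used in the trace):

# Sketch of a waitforbdev-style readiness check.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

waitforbdev() {
    local bdev_name=$1 bdev_timeout=${2:-2000} i
    # Let pending examine callbacks (claims, raid membership) settle first.
    $rpc bdev_wait_for_examine
    for ((i = 0; i < 10; i++)); do
        # -t waits up to bdev_timeout ms for the bdev to be reported.
        if $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

waitforbdev BaseBdev3 2000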
00:12:20.222 [2024-08-14 06:43:47.430923] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:12:20.222 [2024-08-14 06:43:47.431104] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.222 BaseBdev3 00:12:20.222 06:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:20.222 06:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:12:20.222 06:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:20.222 06:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:20.222 06:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:20.222 06:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:20.222 06:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:20.481 06:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:20.741 [ 00:12:20.741 { 00:12:20.741 "name": "BaseBdev3", 00:12:20.741 "aliases": [ 00:12:20.741 "e6a163e5-59d3-469c-ad25-8724c947a52b" 00:12:20.741 ], 00:12:20.741 "product_name": "Malloc disk", 00:12:20.741 "block_size": 512, 00:12:20.741 "num_blocks": 65536, 00:12:20.741 "uuid": "e6a163e5-59d3-469c-ad25-8724c947a52b", 00:12:20.741 "assigned_rate_limits": { 00:12:20.741 "rw_ios_per_sec": 0, 00:12:20.741 "rw_mbytes_per_sec": 0, 00:12:20.741 "r_mbytes_per_sec": 0, 00:12:20.741 "w_mbytes_per_sec": 0 00:12:20.741 }, 00:12:20.741 "claimed": true, 00:12:20.741 "claim_type": "exclusive_write", 00:12:20.741 "zoned": false, 00:12:20.741 "supported_io_types": { 00:12:20.741 "read": true, 00:12:20.741 "write": true, 00:12:20.741 "unmap": true, 00:12:20.741 "flush": true, 00:12:20.741 "reset": true, 00:12:20.741 "nvme_admin": false, 00:12:20.741 "nvme_io": false, 00:12:20.741 "nvme_io_md": false, 00:12:20.741 "write_zeroes": true, 00:12:20.741 "zcopy": true, 00:12:20.741 "get_zone_info": false, 00:12:20.741 "zone_management": false, 00:12:20.741 "zone_append": false, 00:12:20.741 "compare": false, 00:12:20.741 "compare_and_write": false, 00:12:20.741 "abort": true, 00:12:20.741 "seek_hole": false, 00:12:20.741 "seek_data": false, 00:12:20.741 "copy": true, 00:12:20.741 "nvme_iov_md": false 00:12:20.741 }, 00:12:20.741 "memory_domains": [ 00:12:20.741 { 00:12:20.741 "dma_device_id": "system", 00:12:20.741 "dma_device_type": 1 00:12:20.741 }, 00:12:20.741 { 00:12:20.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.741 "dma_device_type": 2 00:12:20.741 } 00:12:20.741 ], 00:12:20.741 "driver_specific": {} 00:12:20.741 } 00:12:20.741 ] 00:12:20.741 06:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:20.741 06:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:20.741 06:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:20.741 06:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:20.741 06:43:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:20.741 06:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:20.741 06:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:20.741 06:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:20.741 06:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:20.741 06:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:20.741 06:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:20.741 06:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:20.741 06:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:20.741 06:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:20.741 06:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.002 06:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:21.002 "name": "Existed_Raid", 00:12:21.002 "uuid": "8f69c828-685b-4e04-b6af-7550406f27f1", 00:12:21.002 "strip_size_kb": 0, 00:12:21.002 "state": "online", 00:12:21.002 "raid_level": "raid1", 00:12:21.002 "superblock": true, 00:12:21.002 "num_base_bdevs": 3, 00:12:21.002 "num_base_bdevs_discovered": 3, 00:12:21.002 "num_base_bdevs_operational": 3, 00:12:21.002 "base_bdevs_list": [ 00:12:21.002 { 00:12:21.002 "name": "BaseBdev1", 00:12:21.002 "uuid": "c71b42f7-ec7b-4877-a200-65342713efb5", 00:12:21.002 "is_configured": true, 00:12:21.002 "data_offset": 2048, 00:12:21.002 "data_size": 63488 00:12:21.002 }, 00:12:21.002 { 00:12:21.002 "name": "BaseBdev2", 00:12:21.002 "uuid": "e72fbcbe-844a-46dd-a1cd-fe7ce4ead59d", 00:12:21.002 "is_configured": true, 00:12:21.002 "data_offset": 2048, 00:12:21.002 "data_size": 63488 00:12:21.002 }, 00:12:21.002 { 00:12:21.002 "name": "BaseBdev3", 00:12:21.002 "uuid": "e6a163e5-59d3-469c-ad25-8724c947a52b", 00:12:21.002 "is_configured": true, 00:12:21.002 "data_offset": 2048, 00:12:21.002 "data_size": 63488 00:12:21.002 } 00:12:21.002 ] 00:12:21.002 }' 00:12:21.002 06:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:21.002 06:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.575 06:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:21.575 06:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:21.575 06:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:21.575 06:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:21.575 06:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:21.575 06:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:21.575 06:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:21.575 06:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:21.575 [2024-08-14 06:43:48.796051] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.575 06:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:21.575 "name": "Existed_Raid", 00:12:21.575 "aliases": [ 00:12:21.575 "8f69c828-685b-4e04-b6af-7550406f27f1" 00:12:21.575 ], 00:12:21.575 "product_name": "Raid Volume", 00:12:21.575 "block_size": 512, 00:12:21.575 "num_blocks": 63488, 00:12:21.575 "uuid": "8f69c828-685b-4e04-b6af-7550406f27f1", 00:12:21.575 "assigned_rate_limits": { 00:12:21.575 "rw_ios_per_sec": 0, 00:12:21.575 "rw_mbytes_per_sec": 0, 00:12:21.575 "r_mbytes_per_sec": 0, 00:12:21.575 "w_mbytes_per_sec": 0 00:12:21.575 }, 00:12:21.575 "claimed": false, 00:12:21.575 "zoned": false, 00:12:21.575 "supported_io_types": { 00:12:21.575 "read": true, 00:12:21.575 "write": true, 00:12:21.575 "unmap": false, 00:12:21.575 "flush": false, 00:12:21.575 "reset": true, 00:12:21.575 "nvme_admin": false, 00:12:21.575 "nvme_io": false, 00:12:21.575 "nvme_io_md": false, 00:12:21.575 "write_zeroes": true, 00:12:21.575 "zcopy": false, 00:12:21.575 "get_zone_info": false, 00:12:21.575 "zone_management": false, 00:12:21.575 "zone_append": false, 00:12:21.575 "compare": false, 00:12:21.575 "compare_and_write": false, 00:12:21.575 "abort": false, 00:12:21.575 "seek_hole": false, 00:12:21.575 "seek_data": false, 00:12:21.575 "copy": false, 00:12:21.575 "nvme_iov_md": false 00:12:21.575 }, 00:12:21.575 "memory_domains": [ 00:12:21.575 { 00:12:21.575 "dma_device_id": "system", 00:12:21.575 "dma_device_type": 1 00:12:21.575 }, 00:12:21.575 { 00:12:21.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.575 "dma_device_type": 2 00:12:21.575 }, 00:12:21.575 { 00:12:21.575 "dma_device_id": "system", 00:12:21.575 "dma_device_type": 1 00:12:21.575 }, 00:12:21.575 { 00:12:21.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.575 "dma_device_type": 2 00:12:21.575 }, 00:12:21.575 { 00:12:21.575 "dma_device_id": "system", 00:12:21.575 "dma_device_type": 1 00:12:21.575 }, 00:12:21.575 { 00:12:21.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.575 "dma_device_type": 2 00:12:21.575 } 00:12:21.575 ], 00:12:21.575 "driver_specific": { 00:12:21.575 "raid": { 00:12:21.575 "uuid": "8f69c828-685b-4e04-b6af-7550406f27f1", 00:12:21.575 "strip_size_kb": 0, 00:12:21.575 "state": "online", 00:12:21.575 "raid_level": "raid1", 00:12:21.575 "superblock": true, 00:12:21.575 "num_base_bdevs": 3, 00:12:21.575 "num_base_bdevs_discovered": 3, 00:12:21.575 "num_base_bdevs_operational": 3, 00:12:21.575 "base_bdevs_list": [ 00:12:21.575 { 00:12:21.575 "name": "BaseBdev1", 00:12:21.575 "uuid": "c71b42f7-ec7b-4877-a200-65342713efb5", 00:12:21.575 "is_configured": true, 00:12:21.575 "data_offset": 2048, 00:12:21.575 "data_size": 63488 00:12:21.575 }, 00:12:21.575 { 00:12:21.575 "name": "BaseBdev2", 00:12:21.575 "uuid": "e72fbcbe-844a-46dd-a1cd-fe7ce4ead59d", 00:12:21.575 "is_configured": true, 00:12:21.575 "data_offset": 2048, 00:12:21.575 "data_size": 63488 00:12:21.575 }, 00:12:21.575 { 00:12:21.575 "name": "BaseBdev3", 00:12:21.575 "uuid": "e6a163e5-59d3-469c-ad25-8724c947a52b", 00:12:21.575 "is_configured": true, 00:12:21.575 "data_offset": 2048, 00:12:21.575 "data_size": 63488 00:12:21.575 } 00:12:21.575 ] 00:12:21.575 } 00:12:21.575 } 
00:12:21.575 }' 00:12:21.575 06:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:21.836 06:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:21.836 BaseBdev2 00:12:21.836 BaseBdev3' 00:12:21.836 06:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:21.836 06:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:21.836 06:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:21.836 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:21.836 "name": "BaseBdev1", 00:12:21.836 "aliases": [ 00:12:21.836 "c71b42f7-ec7b-4877-a200-65342713efb5" 00:12:21.836 ], 00:12:21.836 "product_name": "Malloc disk", 00:12:21.836 "block_size": 512, 00:12:21.836 "num_blocks": 65536, 00:12:21.836 "uuid": "c71b42f7-ec7b-4877-a200-65342713efb5", 00:12:21.836 "assigned_rate_limits": { 00:12:21.836 "rw_ios_per_sec": 0, 00:12:21.836 "rw_mbytes_per_sec": 0, 00:12:21.836 "r_mbytes_per_sec": 0, 00:12:21.836 "w_mbytes_per_sec": 0 00:12:21.836 }, 00:12:21.836 "claimed": true, 00:12:21.836 "claim_type": "exclusive_write", 00:12:21.836 "zoned": false, 00:12:21.836 "supported_io_types": { 00:12:21.836 "read": true, 00:12:21.836 "write": true, 00:12:21.836 "unmap": true, 00:12:21.836 "flush": true, 00:12:21.836 "reset": true, 00:12:21.836 "nvme_admin": false, 00:12:21.836 "nvme_io": false, 00:12:21.836 "nvme_io_md": false, 00:12:21.836 "write_zeroes": true, 00:12:21.836 "zcopy": true, 00:12:21.836 "get_zone_info": false, 00:12:21.836 "zone_management": false, 00:12:21.836 "zone_append": false, 00:12:21.836 "compare": false, 00:12:21.836 "compare_and_write": false, 00:12:21.836 "abort": true, 00:12:21.836 "seek_hole": false, 00:12:21.836 "seek_data": false, 00:12:21.836 "copy": true, 00:12:21.836 "nvme_iov_md": false 00:12:21.836 }, 00:12:21.836 "memory_domains": [ 00:12:21.836 { 00:12:21.836 "dma_device_id": "system", 00:12:21.836 "dma_device_type": 1 00:12:21.836 }, 00:12:21.836 { 00:12:21.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.836 "dma_device_type": 2 00:12:21.836 } 00:12:21.836 ], 00:12:21.836 "driver_specific": {} 00:12:21.836 }' 00:12:21.836 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:22.096 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:22.096 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:22.096 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:22.096 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:22.096 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:22.096 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:22.096 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:22.096 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:22.096 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:12:22.356 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:22.356 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:22.356 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:22.356 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:22.356 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:22.356 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:22.356 "name": "BaseBdev2", 00:12:22.356 "aliases": [ 00:12:22.356 "e72fbcbe-844a-46dd-a1cd-fe7ce4ead59d" 00:12:22.356 ], 00:12:22.356 "product_name": "Malloc disk", 00:12:22.356 "block_size": 512, 00:12:22.356 "num_blocks": 65536, 00:12:22.356 "uuid": "e72fbcbe-844a-46dd-a1cd-fe7ce4ead59d", 00:12:22.356 "assigned_rate_limits": { 00:12:22.356 "rw_ios_per_sec": 0, 00:12:22.356 "rw_mbytes_per_sec": 0, 00:12:22.356 "r_mbytes_per_sec": 0, 00:12:22.356 "w_mbytes_per_sec": 0 00:12:22.356 }, 00:12:22.356 "claimed": true, 00:12:22.356 "claim_type": "exclusive_write", 00:12:22.356 "zoned": false, 00:12:22.356 "supported_io_types": { 00:12:22.356 "read": true, 00:12:22.356 "write": true, 00:12:22.356 "unmap": true, 00:12:22.356 "flush": true, 00:12:22.356 "reset": true, 00:12:22.356 "nvme_admin": false, 00:12:22.356 "nvme_io": false, 00:12:22.356 "nvme_io_md": false, 00:12:22.356 "write_zeroes": true, 00:12:22.356 "zcopy": true, 00:12:22.356 "get_zone_info": false, 00:12:22.356 "zone_management": false, 00:12:22.356 "zone_append": false, 00:12:22.356 "compare": false, 00:12:22.356 "compare_and_write": false, 00:12:22.356 "abort": true, 00:12:22.356 "seek_hole": false, 00:12:22.356 "seek_data": false, 00:12:22.356 "copy": true, 00:12:22.356 "nvme_iov_md": false 00:12:22.356 }, 00:12:22.356 "memory_domains": [ 00:12:22.356 { 00:12:22.356 "dma_device_id": "system", 00:12:22.356 "dma_device_type": 1 00:12:22.356 }, 00:12:22.356 { 00:12:22.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.356 "dma_device_type": 2 00:12:22.356 } 00:12:22.356 ], 00:12:22.356 "driver_specific": {} 00:12:22.356 }' 00:12:22.617 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:22.617 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:22.617 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:22.617 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:22.617 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:22.617 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:22.617 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:22.617 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:22.617 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:22.617 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:22.877 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:22.877 06:43:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:22.877 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:22.877 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:22.877 06:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:22.877 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:22.877 "name": "BaseBdev3", 00:12:22.877 "aliases": [ 00:12:22.877 "e6a163e5-59d3-469c-ad25-8724c947a52b" 00:12:22.877 ], 00:12:22.877 "product_name": "Malloc disk", 00:12:22.877 "block_size": 512, 00:12:22.877 "num_blocks": 65536, 00:12:22.877 "uuid": "e6a163e5-59d3-469c-ad25-8724c947a52b", 00:12:22.877 "assigned_rate_limits": { 00:12:22.877 "rw_ios_per_sec": 0, 00:12:22.877 "rw_mbytes_per_sec": 0, 00:12:22.877 "r_mbytes_per_sec": 0, 00:12:22.877 "w_mbytes_per_sec": 0 00:12:22.877 }, 00:12:22.877 "claimed": true, 00:12:22.877 "claim_type": "exclusive_write", 00:12:22.877 "zoned": false, 00:12:22.877 "supported_io_types": { 00:12:22.877 "read": true, 00:12:22.877 "write": true, 00:12:22.877 "unmap": true, 00:12:22.877 "flush": true, 00:12:22.877 "reset": true, 00:12:22.877 "nvme_admin": false, 00:12:22.877 "nvme_io": false, 00:12:22.877 "nvme_io_md": false, 00:12:22.877 "write_zeroes": true, 00:12:22.877 "zcopy": true, 00:12:22.877 "get_zone_info": false, 00:12:22.877 "zone_management": false, 00:12:22.877 "zone_append": false, 00:12:22.877 "compare": false, 00:12:22.877 "compare_and_write": false, 00:12:22.877 "abort": true, 00:12:22.877 "seek_hole": false, 00:12:22.877 "seek_data": false, 00:12:22.877 "copy": true, 00:12:22.877 "nvme_iov_md": false 00:12:22.877 }, 00:12:22.877 "memory_domains": [ 00:12:22.877 { 00:12:22.877 "dma_device_id": "system", 00:12:22.877 "dma_device_type": 1 00:12:22.877 }, 00:12:22.877 { 00:12:22.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.877 "dma_device_type": 2 00:12:22.877 } 00:12:22.877 ], 00:12:22.877 "driver_specific": {} 00:12:22.877 }' 00:12:22.877 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:23.137 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:23.137 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:23.137 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:23.137 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:23.137 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:23.137 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:23.137 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:23.137 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:23.137 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:23.397 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:23.397 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:23.397 06:43:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:23.397 [2024-08-14 06:43:50.636724] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:23.657 "name": "Existed_Raid", 00:12:23.657 "uuid": "8f69c828-685b-4e04-b6af-7550406f27f1", 00:12:23.657 "strip_size_kb": 0, 00:12:23.657 "state": "online", 00:12:23.657 "raid_level": "raid1", 00:12:23.657 "superblock": true, 00:12:23.657 "num_base_bdevs": 3, 00:12:23.657 "num_base_bdevs_discovered": 2, 00:12:23.657 "num_base_bdevs_operational": 2, 00:12:23.657 "base_bdevs_list": [ 00:12:23.657 { 00:12:23.657 "name": null, 00:12:23.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.657 "is_configured": false, 00:12:23.657 "data_offset": 2048, 00:12:23.657 "data_size": 63488 00:12:23.657 }, 00:12:23.657 { 00:12:23.657 "name": "BaseBdev2", 00:12:23.657 "uuid": "e72fbcbe-844a-46dd-a1cd-fe7ce4ead59d", 00:12:23.657 "is_configured": true, 00:12:23.657 "data_offset": 2048, 00:12:23.657 "data_size": 63488 00:12:23.657 }, 00:12:23.657 { 00:12:23.657 "name": "BaseBdev3", 00:12:23.657 "uuid": "e6a163e5-59d3-469c-ad25-8724c947a52b", 00:12:23.657 "is_configured": true, 00:12:23.657 "data_offset": 2048, 00:12:23.657 "data_size": 63488 00:12:23.657 } 00:12:23.657 ] 00:12:23.657 }' 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:23.657 06:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.226 06:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:24.227 06:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:24.227 06:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.227 06:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:24.486 06:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:24.486 06:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:24.486 06:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:24.745 [2024-08-14 06:43:51.834674] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:24.745 06:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:24.745 06:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:24.745 06:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.745 06:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:25.004 06:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:25.004 06:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:25.004 06:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:25.264 [2024-08-14 06:43:52.279265] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:25.264 [2024-08-14 06:43:52.279416] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.264 [2024-08-14 06:43:52.300647] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.264 [2024-08-14 06:43:52.300820] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.264 [2024-08-14 06:43:52.300839] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:12:25.264 06:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:25.264 06:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:25.264 06:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:25.264 06:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.264 06:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:25.264 06:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:25.264 
06:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:25.264 06:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:25.264 06:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:25.264 06:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:25.524 BaseBdev2 00:12:25.524 06:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:25.524 06:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:25.524 06:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:25.524 06:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:25.524 06:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:25.524 06:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:25.524 06:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:25.783 06:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:26.042 [ 00:12:26.042 { 00:12:26.042 "name": "BaseBdev2", 00:12:26.042 "aliases": [ 00:12:26.042 "f4a5ad37-95ce-4bcf-a57c-c21d645236f8" 00:12:26.042 ], 00:12:26.042 "product_name": "Malloc disk", 00:12:26.042 "block_size": 512, 00:12:26.042 "num_blocks": 65536, 00:12:26.042 "uuid": "f4a5ad37-95ce-4bcf-a57c-c21d645236f8", 00:12:26.042 "assigned_rate_limits": { 00:12:26.042 "rw_ios_per_sec": 0, 00:12:26.043 "rw_mbytes_per_sec": 0, 00:12:26.043 "r_mbytes_per_sec": 0, 00:12:26.043 "w_mbytes_per_sec": 0 00:12:26.043 }, 00:12:26.043 "claimed": false, 00:12:26.043 "zoned": false, 00:12:26.043 "supported_io_types": { 00:12:26.043 "read": true, 00:12:26.043 "write": true, 00:12:26.043 "unmap": true, 00:12:26.043 "flush": true, 00:12:26.043 "reset": true, 00:12:26.043 "nvme_admin": false, 00:12:26.043 "nvme_io": false, 00:12:26.043 "nvme_io_md": false, 00:12:26.043 "write_zeroes": true, 00:12:26.043 "zcopy": true, 00:12:26.043 "get_zone_info": false, 00:12:26.043 "zone_management": false, 00:12:26.043 "zone_append": false, 00:12:26.043 "compare": false, 00:12:26.043 "compare_and_write": false, 00:12:26.043 "abort": true, 00:12:26.043 "seek_hole": false, 00:12:26.043 "seek_data": false, 00:12:26.043 "copy": true, 00:12:26.043 "nvme_iov_md": false 00:12:26.043 }, 00:12:26.043 "memory_domains": [ 00:12:26.043 { 00:12:26.043 "dma_device_id": "system", 00:12:26.043 "dma_device_type": 1 00:12:26.043 }, 00:12:26.043 { 00:12:26.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.043 "dma_device_type": 2 00:12:26.043 } 00:12:26.043 ], 00:12:26.043 "driver_specific": {} 00:12:26.043 } 00:12:26.043 ] 00:12:26.043 06:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:26.043 06:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:26.043 06:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
00:12:26.043 06:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:26.302 BaseBdev3 00:12:26.302 06:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:26.302 06:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:12:26.302 06:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:26.302 06:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:26.302 06:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:26.302 06:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:26.302 06:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:26.563 06:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:26.563 [ 00:12:26.563 { 00:12:26.563 "name": "BaseBdev3", 00:12:26.563 "aliases": [ 00:12:26.563 "cd208054-3dbf-47b6-8a61-60fdfd76e309" 00:12:26.563 ], 00:12:26.563 "product_name": "Malloc disk", 00:12:26.563 "block_size": 512, 00:12:26.563 "num_blocks": 65536, 00:12:26.563 "uuid": "cd208054-3dbf-47b6-8a61-60fdfd76e309", 00:12:26.563 "assigned_rate_limits": { 00:12:26.563 "rw_ios_per_sec": 0, 00:12:26.563 "rw_mbytes_per_sec": 0, 00:12:26.563 "r_mbytes_per_sec": 0, 00:12:26.563 "w_mbytes_per_sec": 0 00:12:26.563 }, 00:12:26.563 "claimed": false, 00:12:26.563 "zoned": false, 00:12:26.563 "supported_io_types": { 00:12:26.563 "read": true, 00:12:26.563 "write": true, 00:12:26.563 "unmap": true, 00:12:26.563 "flush": true, 00:12:26.563 "reset": true, 00:12:26.563 "nvme_admin": false, 00:12:26.563 "nvme_io": false, 00:12:26.563 "nvme_io_md": false, 00:12:26.563 "write_zeroes": true, 00:12:26.563 "zcopy": true, 00:12:26.563 "get_zone_info": false, 00:12:26.563 "zone_management": false, 00:12:26.563 "zone_append": false, 00:12:26.563 "compare": false, 00:12:26.563 "compare_and_write": false, 00:12:26.563 "abort": true, 00:12:26.563 "seek_hole": false, 00:12:26.563 "seek_data": false, 00:12:26.563 "copy": true, 00:12:26.563 "nvme_iov_md": false 00:12:26.563 }, 00:12:26.563 "memory_domains": [ 00:12:26.563 { 00:12:26.563 "dma_device_id": "system", 00:12:26.563 "dma_device_type": 1 00:12:26.563 }, 00:12:26.563 { 00:12:26.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.563 "dma_device_type": 2 00:12:26.563 } 00:12:26.563 ], 00:12:26.563 "driver_specific": {} 00:12:26.563 } 00:12:26.563 ] 00:12:26.563 06:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:26.563 06:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:26.563 06:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:26.563 06:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:26.823 [2024-08-14 06:43:54.008958] bdev.c:8234:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:26.823 [2024-08-14 06:43:54.009047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:26.823 [2024-08-14 06:43:54.009086] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:26.823 [2024-08-14 06:43:54.011485] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.823 06:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:26.823 06:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:26.823 06:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:26.823 06:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:26.823 06:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:26.824 06:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:26.824 06:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:26.824 06:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:26.824 06:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:26.824 06:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:26.824 06:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.824 06:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.086 06:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:27.086 "name": "Existed_Raid", 00:12:27.086 "uuid": "d3d41148-eb87-4b6b-85cb-d1694877100f", 00:12:27.086 "strip_size_kb": 0, 00:12:27.086 "state": "configuring", 00:12:27.086 "raid_level": "raid1", 00:12:27.086 "superblock": true, 00:12:27.086 "num_base_bdevs": 3, 00:12:27.086 "num_base_bdevs_discovered": 2, 00:12:27.086 "num_base_bdevs_operational": 3, 00:12:27.086 "base_bdevs_list": [ 00:12:27.086 { 00:12:27.086 "name": "BaseBdev1", 00:12:27.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.086 "is_configured": false, 00:12:27.086 "data_offset": 0, 00:12:27.086 "data_size": 0 00:12:27.086 }, 00:12:27.086 { 00:12:27.086 "name": "BaseBdev2", 00:12:27.086 "uuid": "f4a5ad37-95ce-4bcf-a57c-c21d645236f8", 00:12:27.086 "is_configured": true, 00:12:27.086 "data_offset": 2048, 00:12:27.086 "data_size": 63488 00:12:27.086 }, 00:12:27.086 { 00:12:27.086 "name": "BaseBdev3", 00:12:27.086 "uuid": "cd208054-3dbf-47b6-8a61-60fdfd76e309", 00:12:27.086 "is_configured": true, 00:12:27.086 "data_offset": 2048, 00:12:27.086 "data_size": 63488 00:12:27.086 } 00:12:27.086 ] 00:12:27.086 }' 00:12:27.086 06:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:27.086 06:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.656 06:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev2 00:12:27.916 [2024-08-14 06:43:55.011253] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:27.916 06:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:27.916 06:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:27.916 06:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:27.916 06:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:27.916 06:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:27.916 06:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:27.916 06:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:27.916 06:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:27.916 06:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:27.916 06:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:27.916 06:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.916 06:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.177 06:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:28.177 "name": "Existed_Raid", 00:12:28.177 "uuid": "d3d41148-eb87-4b6b-85cb-d1694877100f", 00:12:28.177 "strip_size_kb": 0, 00:12:28.177 "state": "configuring", 00:12:28.177 "raid_level": "raid1", 00:12:28.177 "superblock": true, 00:12:28.177 "num_base_bdevs": 3, 00:12:28.177 "num_base_bdevs_discovered": 1, 00:12:28.177 "num_base_bdevs_operational": 3, 00:12:28.177 "base_bdevs_list": [ 00:12:28.177 { 00:12:28.177 "name": "BaseBdev1", 00:12:28.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.177 "is_configured": false, 00:12:28.177 "data_offset": 0, 00:12:28.177 "data_size": 0 00:12:28.177 }, 00:12:28.177 { 00:12:28.177 "name": null, 00:12:28.177 "uuid": "f4a5ad37-95ce-4bcf-a57c-c21d645236f8", 00:12:28.177 "is_configured": false, 00:12:28.177 "data_offset": 2048, 00:12:28.177 "data_size": 63488 00:12:28.177 }, 00:12:28.177 { 00:12:28.177 "name": "BaseBdev3", 00:12:28.177 "uuid": "cd208054-3dbf-47b6-8a61-60fdfd76e309", 00:12:28.177 "is_configured": true, 00:12:28.177 "data_offset": 2048, 00:12:28.177 "data_size": 63488 00:12:28.177 } 00:12:28.177 ] 00:12:28.177 }' 00:12:28.177 06:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:28.177 06:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.747 06:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.747 06:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:29.007 06:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:29.007 06:43:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:29.007 [2024-08-14 06:43:56.250900] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.007 BaseBdev1 00:12:29.266 06:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:29.266 06:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:29.266 06:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:29.266 06:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:29.266 06:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:29.266 06:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:29.266 06:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:29.266 06:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:29.526 [ 00:12:29.526 { 00:12:29.526 "name": "BaseBdev1", 00:12:29.526 "aliases": [ 00:12:29.526 "b3d3db7e-3538-4640-920d-9d905f62f672" 00:12:29.526 ], 00:12:29.526 "product_name": "Malloc disk", 00:12:29.526 "block_size": 512, 00:12:29.526 "num_blocks": 65536, 00:12:29.526 "uuid": "b3d3db7e-3538-4640-920d-9d905f62f672", 00:12:29.526 "assigned_rate_limits": { 00:12:29.526 "rw_ios_per_sec": 0, 00:12:29.526 "rw_mbytes_per_sec": 0, 00:12:29.526 "r_mbytes_per_sec": 0, 00:12:29.526 "w_mbytes_per_sec": 0 00:12:29.526 }, 00:12:29.526 "claimed": true, 00:12:29.526 "claim_type": "exclusive_write", 00:12:29.526 "zoned": false, 00:12:29.526 "supported_io_types": { 00:12:29.526 "read": true, 00:12:29.526 "write": true, 00:12:29.526 "unmap": true, 00:12:29.526 "flush": true, 00:12:29.526 "reset": true, 00:12:29.526 "nvme_admin": false, 00:12:29.526 "nvme_io": false, 00:12:29.526 "nvme_io_md": false, 00:12:29.526 "write_zeroes": true, 00:12:29.526 "zcopy": true, 00:12:29.526 "get_zone_info": false, 00:12:29.526 "zone_management": false, 00:12:29.526 "zone_append": false, 00:12:29.526 "compare": false, 00:12:29.526 "compare_and_write": false, 00:12:29.526 "abort": true, 00:12:29.526 "seek_hole": false, 00:12:29.526 "seek_data": false, 00:12:29.526 "copy": true, 00:12:29.526 "nvme_iov_md": false 00:12:29.526 }, 00:12:29.526 "memory_domains": [ 00:12:29.526 { 00:12:29.526 "dma_device_id": "system", 00:12:29.526 "dma_device_type": 1 00:12:29.526 }, 00:12:29.526 { 00:12:29.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.526 "dma_device_type": 2 00:12:29.526 } 00:12:29.526 ], 00:12:29.526 "driver_specific": {} 00:12:29.526 } 00:12:29.526 ] 00:12:29.526 06:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:29.526 06:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:29.526 06:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:29.526 06:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:12:29.526 06:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:29.526 06:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:29.526 06:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:29.526 06:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:29.526 06:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:29.526 06:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:29.526 06:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:29.526 06:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.526 06:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.786 06:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:29.786 "name": "Existed_Raid", 00:12:29.786 "uuid": "d3d41148-eb87-4b6b-85cb-d1694877100f", 00:12:29.786 "strip_size_kb": 0, 00:12:29.786 "state": "configuring", 00:12:29.786 "raid_level": "raid1", 00:12:29.786 "superblock": true, 00:12:29.786 "num_base_bdevs": 3, 00:12:29.786 "num_base_bdevs_discovered": 2, 00:12:29.786 "num_base_bdevs_operational": 3, 00:12:29.786 "base_bdevs_list": [ 00:12:29.786 { 00:12:29.786 "name": "BaseBdev1", 00:12:29.786 "uuid": "b3d3db7e-3538-4640-920d-9d905f62f672", 00:12:29.786 "is_configured": true, 00:12:29.786 "data_offset": 2048, 00:12:29.786 "data_size": 63488 00:12:29.786 }, 00:12:29.786 { 00:12:29.786 "name": null, 00:12:29.786 "uuid": "f4a5ad37-95ce-4bcf-a57c-c21d645236f8", 00:12:29.786 "is_configured": false, 00:12:29.786 "data_offset": 2048, 00:12:29.786 "data_size": 63488 00:12:29.786 }, 00:12:29.786 { 00:12:29.786 "name": "BaseBdev3", 00:12:29.786 "uuid": "cd208054-3dbf-47b6-8a61-60fdfd76e309", 00:12:29.786 "is_configured": true, 00:12:29.786 "data_offset": 2048, 00:12:29.786 "data_size": 63488 00:12:29.786 } 00:12:29.786 ] 00:12:29.786 }' 00:12:29.786 06:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:29.786 06:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.356 06:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:30.356 06:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.616 06:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:30.616 06:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:30.876 [2024-08-14 06:43:57.936347] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:30.876 06:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:30.876 06:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:12:30.876 06:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:30.876 06:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:30.876 06:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:30.876 06:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:30.876 06:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:30.876 06:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:30.876 06:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:30.876 06:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:30.876 06:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.876 06:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.136 06:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:31.136 "name": "Existed_Raid", 00:12:31.136 "uuid": "d3d41148-eb87-4b6b-85cb-d1694877100f", 00:12:31.136 "strip_size_kb": 0, 00:12:31.136 "state": "configuring", 00:12:31.136 "raid_level": "raid1", 00:12:31.136 "superblock": true, 00:12:31.136 "num_base_bdevs": 3, 00:12:31.136 "num_base_bdevs_discovered": 1, 00:12:31.136 "num_base_bdevs_operational": 3, 00:12:31.136 "base_bdevs_list": [ 00:12:31.136 { 00:12:31.136 "name": "BaseBdev1", 00:12:31.136 "uuid": "b3d3db7e-3538-4640-920d-9d905f62f672", 00:12:31.136 "is_configured": true, 00:12:31.136 "data_offset": 2048, 00:12:31.136 "data_size": 63488 00:12:31.136 }, 00:12:31.136 { 00:12:31.136 "name": null, 00:12:31.136 "uuid": "f4a5ad37-95ce-4bcf-a57c-c21d645236f8", 00:12:31.136 "is_configured": false, 00:12:31.136 "data_offset": 2048, 00:12:31.136 "data_size": 63488 00:12:31.136 }, 00:12:31.136 { 00:12:31.136 "name": null, 00:12:31.136 "uuid": "cd208054-3dbf-47b6-8a61-60fdfd76e309", 00:12:31.136 "is_configured": false, 00:12:31.136 "data_offset": 2048, 00:12:31.136 "data_size": 63488 00:12:31.136 } 00:12:31.136 ] 00:12:31.136 }' 00:12:31.136 06:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:31.136 06:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.705 06:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.705 06:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:31.705 06:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:31.705 06:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:31.965 [2024-08-14 06:43:59.122342] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:31.965 06:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:31.965 06:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:31.965 06:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:31.965 06:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:31.965 06:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:31.965 06:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:31.965 06:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:31.965 06:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:31.965 06:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:31.965 06:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:31.965 06:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.965 06:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.225 06:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:32.226 "name": "Existed_Raid", 00:12:32.226 "uuid": "d3d41148-eb87-4b6b-85cb-d1694877100f", 00:12:32.226 "strip_size_kb": 0, 00:12:32.226 "state": "configuring", 00:12:32.226 "raid_level": "raid1", 00:12:32.226 "superblock": true, 00:12:32.226 "num_base_bdevs": 3, 00:12:32.226 "num_base_bdevs_discovered": 2, 00:12:32.226 "num_base_bdevs_operational": 3, 00:12:32.226 "base_bdevs_list": [ 00:12:32.226 { 00:12:32.226 "name": "BaseBdev1", 00:12:32.226 "uuid": "b3d3db7e-3538-4640-920d-9d905f62f672", 00:12:32.226 "is_configured": true, 00:12:32.226 "data_offset": 2048, 00:12:32.226 "data_size": 63488 00:12:32.226 }, 00:12:32.226 { 00:12:32.226 "name": null, 00:12:32.226 "uuid": "f4a5ad37-95ce-4bcf-a57c-c21d645236f8", 00:12:32.226 "is_configured": false, 00:12:32.226 "data_offset": 2048, 00:12:32.226 "data_size": 63488 00:12:32.226 }, 00:12:32.226 { 00:12:32.226 "name": "BaseBdev3", 00:12:32.226 "uuid": "cd208054-3dbf-47b6-8a61-60fdfd76e309", 00:12:32.226 "is_configured": true, 00:12:32.226 "data_offset": 2048, 00:12:32.226 "data_size": 63488 00:12:32.226 } 00:12:32.226 ] 00:12:32.226 }' 00:12:32.226 06:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:32.226 06:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.795 06:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:32.795 06:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:33.054 06:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:33.054 06:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:33.315 [2024-08-14 06:44:00.356330] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev1 00:12:33.315 06:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:33.315 06:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:33.315 06:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:33.315 06:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:33.315 06:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:33.315 06:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:33.315 06:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:33.315 06:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:33.315 06:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:33.315 06:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:33.315 06:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.315 06:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.575 06:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:33.575 "name": "Existed_Raid", 00:12:33.575 "uuid": "d3d41148-eb87-4b6b-85cb-d1694877100f", 00:12:33.575 "strip_size_kb": 0, 00:12:33.575 "state": "configuring", 00:12:33.575 "raid_level": "raid1", 00:12:33.575 "superblock": true, 00:12:33.575 "num_base_bdevs": 3, 00:12:33.575 "num_base_bdevs_discovered": 1, 00:12:33.575 "num_base_bdevs_operational": 3, 00:12:33.575 "base_bdevs_list": [ 00:12:33.575 { 00:12:33.575 "name": null, 00:12:33.575 "uuid": "b3d3db7e-3538-4640-920d-9d905f62f672", 00:12:33.575 "is_configured": false, 00:12:33.575 "data_offset": 2048, 00:12:33.575 "data_size": 63488 00:12:33.575 }, 00:12:33.575 { 00:12:33.575 "name": null, 00:12:33.575 "uuid": "f4a5ad37-95ce-4bcf-a57c-c21d645236f8", 00:12:33.575 "is_configured": false, 00:12:33.575 "data_offset": 2048, 00:12:33.575 "data_size": 63488 00:12:33.575 }, 00:12:33.575 { 00:12:33.575 "name": "BaseBdev3", 00:12:33.575 "uuid": "cd208054-3dbf-47b6-8a61-60fdfd76e309", 00:12:33.575 "is_configured": true, 00:12:33.575 "data_offset": 2048, 00:12:33.575 "data_size": 63488 00:12:33.575 } 00:12:33.575 ] 00:12:33.575 }' 00:12:33.575 06:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:33.575 06:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.143 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.143 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:34.143 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:34.143 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:34.402 [2024-08-14 06:44:01.584985] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.402 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:34.402 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:34.402 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:34.402 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:34.402 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:34.402 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:34.402 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:34.402 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:34.402 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:34.402 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:34.402 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.402 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.661 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:34.661 "name": "Existed_Raid", 00:12:34.661 "uuid": "d3d41148-eb87-4b6b-85cb-d1694877100f", 00:12:34.661 "strip_size_kb": 0, 00:12:34.661 "state": "configuring", 00:12:34.661 "raid_level": "raid1", 00:12:34.661 "superblock": true, 00:12:34.661 "num_base_bdevs": 3, 00:12:34.661 "num_base_bdevs_discovered": 2, 00:12:34.661 "num_base_bdevs_operational": 3, 00:12:34.661 "base_bdevs_list": [ 00:12:34.661 { 00:12:34.661 "name": null, 00:12:34.661 "uuid": "b3d3db7e-3538-4640-920d-9d905f62f672", 00:12:34.661 "is_configured": false, 00:12:34.661 "data_offset": 2048, 00:12:34.661 "data_size": 63488 00:12:34.661 }, 00:12:34.661 { 00:12:34.661 "name": "BaseBdev2", 00:12:34.661 "uuid": "f4a5ad37-95ce-4bcf-a57c-c21d645236f8", 00:12:34.661 "is_configured": true, 00:12:34.661 "data_offset": 2048, 00:12:34.661 "data_size": 63488 00:12:34.661 }, 00:12:34.661 { 00:12:34.661 "name": "BaseBdev3", 00:12:34.661 "uuid": "cd208054-3dbf-47b6-8a61-60fdfd76e309", 00:12:34.661 "is_configured": true, 00:12:34.661 "data_offset": 2048, 00:12:34.661 "data_size": 63488 00:12:34.661 } 00:12:34.661 ] 00:12:34.661 }' 00:12:34.661 06:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:34.661 06:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.227 06:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.227 06:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:35.486 06:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:35.486 06:44:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:35.486 06:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.745 06:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b3d3db7e-3538-4640-920d-9d905f62f672 00:12:36.004 [2024-08-14 06:44:03.025617] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:36.004 [2024-08-14 06:44:03.025829] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:12:36.004 [2024-08-14 06:44:03.025847] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:36.004 [2024-08-14 06:44:03.026118] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:36.004 [2024-08-14 06:44:03.026277] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:12:36.004 [2024-08-14 06:44:03.026292] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:12:36.004 [2024-08-14 06:44:03.026410] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.004 NewBaseBdev 00:12:36.004 06:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:36.004 06:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:12:36.004 06:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:36.004 06:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:36.004 06:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:36.004 06:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:36.004 06:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:36.004 06:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:36.264 [ 00:12:36.264 { 00:12:36.264 "name": "NewBaseBdev", 00:12:36.264 "aliases": [ 00:12:36.264 "b3d3db7e-3538-4640-920d-9d905f62f672" 00:12:36.264 ], 00:12:36.264 "product_name": "Malloc disk", 00:12:36.264 "block_size": 512, 00:12:36.264 "num_blocks": 65536, 00:12:36.264 "uuid": "b3d3db7e-3538-4640-920d-9d905f62f672", 00:12:36.264 "assigned_rate_limits": { 00:12:36.264 "rw_ios_per_sec": 0, 00:12:36.264 "rw_mbytes_per_sec": 0, 00:12:36.264 "r_mbytes_per_sec": 0, 00:12:36.264 "w_mbytes_per_sec": 0 00:12:36.264 }, 00:12:36.264 "claimed": true, 00:12:36.264 "claim_type": "exclusive_write", 00:12:36.264 "zoned": false, 00:12:36.264 "supported_io_types": { 00:12:36.264 "read": true, 00:12:36.264 "write": true, 00:12:36.264 "unmap": true, 00:12:36.264 "flush": true, 00:12:36.264 "reset": true, 00:12:36.264 "nvme_admin": false, 00:12:36.264 "nvme_io": false, 00:12:36.264 "nvme_io_md": false, 00:12:36.264 "write_zeroes": true, 00:12:36.264 "zcopy": true, 00:12:36.264 "get_zone_info": 
false, 00:12:36.264 "zone_management": false, 00:12:36.264 "zone_append": false, 00:12:36.264 "compare": false, 00:12:36.264 "compare_and_write": false, 00:12:36.264 "abort": true, 00:12:36.264 "seek_hole": false, 00:12:36.264 "seek_data": false, 00:12:36.264 "copy": true, 00:12:36.264 "nvme_iov_md": false 00:12:36.264 }, 00:12:36.264 "memory_domains": [ 00:12:36.264 { 00:12:36.264 "dma_device_id": "system", 00:12:36.264 "dma_device_type": 1 00:12:36.264 }, 00:12:36.264 { 00:12:36.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.264 "dma_device_type": 2 00:12:36.264 } 00:12:36.264 ], 00:12:36.264 "driver_specific": {} 00:12:36.264 } 00:12:36.264 ] 00:12:36.264 06:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:36.264 06:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:36.264 06:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:36.264 06:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:36.264 06:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:36.264 06:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:36.264 06:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:36.264 06:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:36.264 06:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:36.264 06:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:36.264 06:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:36.264 06:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.264 06:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:36.523 06:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:36.523 "name": "Existed_Raid", 00:12:36.523 "uuid": "d3d41148-eb87-4b6b-85cb-d1694877100f", 00:12:36.523 "strip_size_kb": 0, 00:12:36.523 "state": "online", 00:12:36.523 "raid_level": "raid1", 00:12:36.523 "superblock": true, 00:12:36.523 "num_base_bdevs": 3, 00:12:36.523 "num_base_bdevs_discovered": 3, 00:12:36.523 "num_base_bdevs_operational": 3, 00:12:36.523 "base_bdevs_list": [ 00:12:36.523 { 00:12:36.523 "name": "NewBaseBdev", 00:12:36.523 "uuid": "b3d3db7e-3538-4640-920d-9d905f62f672", 00:12:36.523 "is_configured": true, 00:12:36.523 "data_offset": 2048, 00:12:36.523 "data_size": 63488 00:12:36.523 }, 00:12:36.523 { 00:12:36.523 "name": "BaseBdev2", 00:12:36.523 "uuid": "f4a5ad37-95ce-4bcf-a57c-c21d645236f8", 00:12:36.523 "is_configured": true, 00:12:36.523 "data_offset": 2048, 00:12:36.523 "data_size": 63488 00:12:36.523 }, 00:12:36.523 { 00:12:36.523 "name": "BaseBdev3", 00:12:36.523 "uuid": "cd208054-3dbf-47b6-8a61-60fdfd76e309", 00:12:36.523 "is_configured": true, 00:12:36.523 "data_offset": 2048, 00:12:36.523 "data_size": 63488 00:12:36.523 } 00:12:36.523 ] 00:12:36.523 }' 00:12:36.523 06:44:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:36.523 06:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.091 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:37.091 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:37.091 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:37.091 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:37.091 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:37.091 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:37.091 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:37.091 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:37.350 [2024-08-14 06:44:04.443619] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.350 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:37.350 "name": "Existed_Raid", 00:12:37.350 "aliases": [ 00:12:37.350 "d3d41148-eb87-4b6b-85cb-d1694877100f" 00:12:37.350 ], 00:12:37.350 "product_name": "Raid Volume", 00:12:37.350 "block_size": 512, 00:12:37.350 "num_blocks": 63488, 00:12:37.350 "uuid": "d3d41148-eb87-4b6b-85cb-d1694877100f", 00:12:37.350 "assigned_rate_limits": { 00:12:37.350 "rw_ios_per_sec": 0, 00:12:37.350 "rw_mbytes_per_sec": 0, 00:12:37.350 "r_mbytes_per_sec": 0, 00:12:37.350 "w_mbytes_per_sec": 0 00:12:37.350 }, 00:12:37.350 "claimed": false, 00:12:37.350 "zoned": false, 00:12:37.350 "supported_io_types": { 00:12:37.350 "read": true, 00:12:37.350 "write": true, 00:12:37.350 "unmap": false, 00:12:37.350 "flush": false, 00:12:37.350 "reset": true, 00:12:37.350 "nvme_admin": false, 00:12:37.350 "nvme_io": false, 00:12:37.350 "nvme_io_md": false, 00:12:37.350 "write_zeroes": true, 00:12:37.350 "zcopy": false, 00:12:37.350 "get_zone_info": false, 00:12:37.350 "zone_management": false, 00:12:37.350 "zone_append": false, 00:12:37.350 "compare": false, 00:12:37.350 "compare_and_write": false, 00:12:37.350 "abort": false, 00:12:37.350 "seek_hole": false, 00:12:37.350 "seek_data": false, 00:12:37.350 "copy": false, 00:12:37.350 "nvme_iov_md": false 00:12:37.350 }, 00:12:37.350 "memory_domains": [ 00:12:37.350 { 00:12:37.350 "dma_device_id": "system", 00:12:37.350 "dma_device_type": 1 00:12:37.350 }, 00:12:37.350 { 00:12:37.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.350 "dma_device_type": 2 00:12:37.350 }, 00:12:37.350 { 00:12:37.350 "dma_device_id": "system", 00:12:37.350 "dma_device_type": 1 00:12:37.350 }, 00:12:37.350 { 00:12:37.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.350 "dma_device_type": 2 00:12:37.350 }, 00:12:37.350 { 00:12:37.350 "dma_device_id": "system", 00:12:37.350 "dma_device_type": 1 00:12:37.350 }, 00:12:37.350 { 00:12:37.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.350 "dma_device_type": 2 00:12:37.350 } 00:12:37.350 ], 00:12:37.350 "driver_specific": { 00:12:37.350 "raid": { 00:12:37.350 "uuid": "d3d41148-eb87-4b6b-85cb-d1694877100f", 00:12:37.350 "strip_size_kb": 0, 00:12:37.350 "state": "online", 00:12:37.350 "raid_level": "raid1", 
00:12:37.350 "superblock": true, 00:12:37.350 "num_base_bdevs": 3, 00:12:37.350 "num_base_bdevs_discovered": 3, 00:12:37.350 "num_base_bdevs_operational": 3, 00:12:37.350 "base_bdevs_list": [ 00:12:37.350 { 00:12:37.350 "name": "NewBaseBdev", 00:12:37.350 "uuid": "b3d3db7e-3538-4640-920d-9d905f62f672", 00:12:37.350 "is_configured": true, 00:12:37.350 "data_offset": 2048, 00:12:37.350 "data_size": 63488 00:12:37.350 }, 00:12:37.350 { 00:12:37.350 "name": "BaseBdev2", 00:12:37.350 "uuid": "f4a5ad37-95ce-4bcf-a57c-c21d645236f8", 00:12:37.350 "is_configured": true, 00:12:37.350 "data_offset": 2048, 00:12:37.350 "data_size": 63488 00:12:37.350 }, 00:12:37.350 { 00:12:37.350 "name": "BaseBdev3", 00:12:37.350 "uuid": "cd208054-3dbf-47b6-8a61-60fdfd76e309", 00:12:37.350 "is_configured": true, 00:12:37.350 "data_offset": 2048, 00:12:37.350 "data_size": 63488 00:12:37.350 } 00:12:37.350 ] 00:12:37.350 } 00:12:37.350 } 00:12:37.350 }' 00:12:37.350 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:37.350 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:37.350 BaseBdev2 00:12:37.350 BaseBdev3' 00:12:37.350 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:37.350 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:37.350 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:37.609 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:37.609 "name": "NewBaseBdev", 00:12:37.609 "aliases": [ 00:12:37.609 "b3d3db7e-3538-4640-920d-9d905f62f672" 00:12:37.609 ], 00:12:37.609 "product_name": "Malloc disk", 00:12:37.609 "block_size": 512, 00:12:37.609 "num_blocks": 65536, 00:12:37.609 "uuid": "b3d3db7e-3538-4640-920d-9d905f62f672", 00:12:37.609 "assigned_rate_limits": { 00:12:37.609 "rw_ios_per_sec": 0, 00:12:37.609 "rw_mbytes_per_sec": 0, 00:12:37.609 "r_mbytes_per_sec": 0, 00:12:37.609 "w_mbytes_per_sec": 0 00:12:37.609 }, 00:12:37.609 "claimed": true, 00:12:37.609 "claim_type": "exclusive_write", 00:12:37.609 "zoned": false, 00:12:37.609 "supported_io_types": { 00:12:37.609 "read": true, 00:12:37.609 "write": true, 00:12:37.609 "unmap": true, 00:12:37.609 "flush": true, 00:12:37.609 "reset": true, 00:12:37.609 "nvme_admin": false, 00:12:37.609 "nvme_io": false, 00:12:37.609 "nvme_io_md": false, 00:12:37.609 "write_zeroes": true, 00:12:37.609 "zcopy": true, 00:12:37.609 "get_zone_info": false, 00:12:37.609 "zone_management": false, 00:12:37.609 "zone_append": false, 00:12:37.609 "compare": false, 00:12:37.609 "compare_and_write": false, 00:12:37.609 "abort": true, 00:12:37.609 "seek_hole": false, 00:12:37.609 "seek_data": false, 00:12:37.609 "copy": true, 00:12:37.609 "nvme_iov_md": false 00:12:37.609 }, 00:12:37.609 "memory_domains": [ 00:12:37.609 { 00:12:37.609 "dma_device_id": "system", 00:12:37.609 "dma_device_type": 1 00:12:37.609 }, 00:12:37.609 { 00:12:37.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.609 "dma_device_type": 2 00:12:37.609 } 00:12:37.609 ], 00:12:37.609 "driver_specific": {} 00:12:37.609 }' 00:12:37.609 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.609 06:44:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.609 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:37.609 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.609 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.609 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:37.609 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.868 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.868 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:37.868 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.868 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.868 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:37.868 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:37.868 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:37.868 06:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:38.127 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:38.127 "name": "BaseBdev2", 00:12:38.127 "aliases": [ 00:12:38.127 "f4a5ad37-95ce-4bcf-a57c-c21d645236f8" 00:12:38.127 ], 00:12:38.127 "product_name": "Malloc disk", 00:12:38.127 "block_size": 512, 00:12:38.127 "num_blocks": 65536, 00:12:38.127 "uuid": "f4a5ad37-95ce-4bcf-a57c-c21d645236f8", 00:12:38.127 "assigned_rate_limits": { 00:12:38.127 "rw_ios_per_sec": 0, 00:12:38.127 "rw_mbytes_per_sec": 0, 00:12:38.127 "r_mbytes_per_sec": 0, 00:12:38.127 "w_mbytes_per_sec": 0 00:12:38.127 }, 00:12:38.127 "claimed": true, 00:12:38.127 "claim_type": "exclusive_write", 00:12:38.127 "zoned": false, 00:12:38.127 "supported_io_types": { 00:12:38.127 "read": true, 00:12:38.127 "write": true, 00:12:38.127 "unmap": true, 00:12:38.127 "flush": true, 00:12:38.127 "reset": true, 00:12:38.128 "nvme_admin": false, 00:12:38.128 "nvme_io": false, 00:12:38.128 "nvme_io_md": false, 00:12:38.128 "write_zeroes": true, 00:12:38.128 "zcopy": true, 00:12:38.128 "get_zone_info": false, 00:12:38.128 "zone_management": false, 00:12:38.128 "zone_append": false, 00:12:38.128 "compare": false, 00:12:38.128 "compare_and_write": false, 00:12:38.128 "abort": true, 00:12:38.128 "seek_hole": false, 00:12:38.128 "seek_data": false, 00:12:38.128 "copy": true, 00:12:38.128 "nvme_iov_md": false 00:12:38.128 }, 00:12:38.128 "memory_domains": [ 00:12:38.128 { 00:12:38.128 "dma_device_id": "system", 00:12:38.128 "dma_device_type": 1 00:12:38.128 }, 00:12:38.128 { 00:12:38.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.128 "dma_device_type": 2 00:12:38.128 } 00:12:38.128 ], 00:12:38.128 "driver_specific": {} 00:12:38.128 }' 00:12:38.128 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:38.128 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:38.128 06:44:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:38.128 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:38.128 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:38.387 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:38.387 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.387 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.387 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:38.387 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.387 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.387 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:38.387 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:38.387 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:38.387 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:38.646 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:38.646 "name": "BaseBdev3", 00:12:38.646 "aliases": [ 00:12:38.646 "cd208054-3dbf-47b6-8a61-60fdfd76e309" 00:12:38.646 ], 00:12:38.646 "product_name": "Malloc disk", 00:12:38.646 "block_size": 512, 00:12:38.646 "num_blocks": 65536, 00:12:38.646 "uuid": "cd208054-3dbf-47b6-8a61-60fdfd76e309", 00:12:38.646 "assigned_rate_limits": { 00:12:38.646 "rw_ios_per_sec": 0, 00:12:38.646 "rw_mbytes_per_sec": 0, 00:12:38.646 "r_mbytes_per_sec": 0, 00:12:38.646 "w_mbytes_per_sec": 0 00:12:38.646 }, 00:12:38.646 "claimed": true, 00:12:38.646 "claim_type": "exclusive_write", 00:12:38.646 "zoned": false, 00:12:38.646 "supported_io_types": { 00:12:38.646 "read": true, 00:12:38.646 "write": true, 00:12:38.646 "unmap": true, 00:12:38.646 "flush": true, 00:12:38.646 "reset": true, 00:12:38.646 "nvme_admin": false, 00:12:38.646 "nvme_io": false, 00:12:38.646 "nvme_io_md": false, 00:12:38.646 "write_zeroes": true, 00:12:38.646 "zcopy": true, 00:12:38.646 "get_zone_info": false, 00:12:38.646 "zone_management": false, 00:12:38.646 "zone_append": false, 00:12:38.646 "compare": false, 00:12:38.646 "compare_and_write": false, 00:12:38.646 "abort": true, 00:12:38.646 "seek_hole": false, 00:12:38.646 "seek_data": false, 00:12:38.647 "copy": true, 00:12:38.647 "nvme_iov_md": false 00:12:38.647 }, 00:12:38.647 "memory_domains": [ 00:12:38.647 { 00:12:38.647 "dma_device_id": "system", 00:12:38.647 "dma_device_type": 1 00:12:38.647 }, 00:12:38.647 { 00:12:38.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.647 "dma_device_type": 2 00:12:38.647 } 00:12:38.647 ], 00:12:38.647 "driver_specific": {} 00:12:38.647 }' 00:12:38.647 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:38.647 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:38.647 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:38.647 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
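(Annotation, not part of the recorded trace.) The state-function test traced above reduces to a short RPC sequence against the dedicated raid socket. A minimal sketch follows, assuming the rpc shell variable as shorthand; the bdev names, UUID, socket path, and RPC methods are taken verbatim from the trace, while the final .state jq filter is an illustrative expression rather than a line from the run:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Inspect the raid bdev and its base bdev slots.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

# Remove a base bdev; the array stays "configuring" with one fewer discovered base bdev.
$rpc bdev_malloc_delete BaseBdev1

# Re-attach an existing bdev to a free slot.
$rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2

# Recreate the missing base bdev with the original UUID so the raid can claim it,
# then wait for examine to finish and confirm the new bdev is visible.
$rpc bdev_malloc_create 32 512 -b NewBaseBdev -u b3d3db7e-3538-4640-920d-9d905f62f672
$rpc bdev_wait_for_examine
$rpc bdev_get_bdevs -b NewBaseBdev -t 2000

# With all three base bdevs configured the array reports "online".
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'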
00:12:38.647 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:38.906 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:38.906 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.906 06:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.906 06:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:38.906 06:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.906 06:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.906 06:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:38.906 06:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:39.166 [2024-08-14 06:44:06.276306] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:39.166 [2024-08-14 06:44:06.276350] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:39.166 [2024-08-14 06:44:06.276465] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:39.166 [2024-08-14 06:44:06.276762] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.166 [2024-08-14 06:44:06.276783] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:12:39.166 06:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 81615 00:12:39.166 06:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 81615 ']' 00:12:39.166 06:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 81615 00:12:39.166 06:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:12:39.166 06:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:39.166 06:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81615 00:12:39.166 killing process with pid 81615 00:12:39.166 06:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:39.166 06:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:39.166 06:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81615' 00:12:39.166 06:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 81615 00:12:39.166 [2024-08-14 06:44:06.335789] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:39.166 06:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 81615 00:12:39.166 [2024-08-14 06:44:06.366688] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:39.425 06:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:39.425 00:12:39.425 real 0m25.494s 00:12:39.425 user 0m47.352s 00:12:39.425 sys 0m3.835s 00:12:39.425 06:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:12:39.425 06:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.425 ************************************ 00:12:39.425 END TEST raid_state_function_test_sb 00:12:39.425 ************************************ 00:12:39.425 06:44:06 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:12:39.425 06:44:06 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:39.425 06:44:06 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:39.425 06:44:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:39.684 ************************************ 00:12:39.684 START TEST raid_superblock_test 00:12:39.684 ************************************ 00:12:39.684 06:44:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 3 00:12:39.684 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:12:39.684 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:12:39.684 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:12:39.684 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:12:39.684 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:12:39.684 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:12:39.684 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:12:39.684 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:12:39.685 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:12:39.685 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:12:39.685 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:12:39.685 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:12:39.685 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:12:39.685 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:12:39.685 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:12:39.685 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=82519 00:12:39.685 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:39.685 06:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 82519 /var/tmp/spdk-raid.sock 00:12:39.685 06:44:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 82519 ']' 00:12:39.685 06:44:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:39.685 06:44:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:39.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:39.685 06:44:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
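(Annotation, not part of the recorded trace.) The raid_superblock_test starting here drives the same rpc.py socket. A minimal sketch of its setup and of the negative case it checks, assuming the rpc variable and the for loops as shorthand; bdev names, UUIDs, flags, and the expected error are taken from the trace that follows:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# One malloc bdev per slot, each wrapped in a passthru bdev with a fixed UUID.
for i in 1 2 3; do
  $rpc bdev_malloc_create 32 512 -b malloc$i
  $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
done

# Assemble a raid1 volume with an on-disk superblock (-s) and verify it is online.
$rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

# Tear the volume down again; the malloc bdevs keep the on-disk superblock.
$rpc bdev_raid_delete raid_bdev1
for i in 1 2 3; do $rpc bdev_passthru_delete pt$i; done

# Recreating an array directly on the malloc bdevs is now expected to fail with
# JSON-RPC error -17 ("Failed to create RAID bdev raid_bdev1: File exists"),
# because each of them still carries the superblock written for raid_bdev1.
$rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1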
00:12:39.685 06:44:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:39.685 06:44:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.685 [2024-08-14 06:44:06.764534] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:12:39.685 [2024-08-14 06:44:06.764674] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82519 ] 00:12:39.685 [2024-08-14 06:44:06.913349] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.943 [2024-08-14 06:44:06.962190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.943 [2024-08-14 06:44:07.004219] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.943 [2024-08-14 06:44:07.004259] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.517 06:44:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:40.517 06:44:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:12:40.517 06:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:12:40.517 06:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:12:40.517 06:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:12:40.517 06:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:12:40.517 06:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:40.517 06:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:40.517 06:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:12:40.517 06:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:40.517 06:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:40.780 malloc1 00:12:40.780 06:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:40.780 [2024-08-14 06:44:08.008324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:40.780 [2024-08-14 06:44:08.008418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.780 [2024-08-14 06:44:08.008446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:40.780 [2024-08-14 06:44:08.008456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.780 [2024-08-14 06:44:08.010746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.780 [2024-08-14 06:44:08.010787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:40.780 pt1 00:12:40.780 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:12:40.780 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= 
num_base_bdevs )) 00:12:40.780 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:12:40.780 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:12:40.780 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:40.780 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:40.780 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:12:40.780 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:40.780 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:41.040 malloc2 00:12:41.040 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:41.298 [2024-08-14 06:44:08.420705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:41.298 [2024-08-14 06:44:08.420785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.298 [2024-08-14 06:44:08.420808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:41.298 [2024-08-14 06:44:08.420817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.298 [2024-08-14 06:44:08.423168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.298 [2024-08-14 06:44:08.423214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:41.298 pt2 00:12:41.298 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:12:41.298 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:12:41.298 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:12:41.298 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:12:41.298 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:41.298 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:41.298 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:12:41.298 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:41.299 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:41.558 malloc3 00:12:41.558 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:41.818 [2024-08-14 06:44:08.841267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:41.818 [2024-08-14 06:44:08.841343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.818 [2024-08-14 06:44:08.841369] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:41.818 [2024-08-14 06:44:08.841378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.818 [2024-08-14 06:44:08.843673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.818 [2024-08-14 06:44:08.843707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:41.818 pt3 00:12:41.818 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:12:41.818 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:12:41.818 06:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:41.818 [2024-08-14 06:44:09.029034] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:41.818 [2024-08-14 06:44:09.030932] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:41.818 [2024-08-14 06:44:09.031000] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:41.818 [2024-08-14 06:44:09.031195] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:41.818 [2024-08-14 06:44:09.031243] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:41.818 [2024-08-14 06:44:09.031579] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:41.818 [2024-08-14 06:44:09.031760] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:41.818 [2024-08-14 06:44:09.031778] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:41.818 [2024-08-14 06:44:09.031930] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.818 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:41.818 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:41.818 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:41.818 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:41.818 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:41.818 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:41.818 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:41.818 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:41.818 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:41.818 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:41.818 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.818 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.078 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:42.078 "name": "raid_bdev1", 00:12:42.078 
"uuid": "bdeccb9e-c910-451c-99de-509d52f0ff6f", 00:12:42.078 "strip_size_kb": 0, 00:12:42.078 "state": "online", 00:12:42.078 "raid_level": "raid1", 00:12:42.078 "superblock": true, 00:12:42.078 "num_base_bdevs": 3, 00:12:42.078 "num_base_bdevs_discovered": 3, 00:12:42.078 "num_base_bdevs_operational": 3, 00:12:42.078 "base_bdevs_list": [ 00:12:42.078 { 00:12:42.078 "name": "pt1", 00:12:42.078 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:42.078 "is_configured": true, 00:12:42.078 "data_offset": 2048, 00:12:42.078 "data_size": 63488 00:12:42.078 }, 00:12:42.078 { 00:12:42.078 "name": "pt2", 00:12:42.078 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:42.078 "is_configured": true, 00:12:42.078 "data_offset": 2048, 00:12:42.078 "data_size": 63488 00:12:42.078 }, 00:12:42.078 { 00:12:42.078 "name": "pt3", 00:12:42.078 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:42.078 "is_configured": true, 00:12:42.078 "data_offset": 2048, 00:12:42.078 "data_size": 63488 00:12:42.078 } 00:12:42.078 ] 00:12:42.078 }' 00:12:42.078 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:42.078 06:44:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.648 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:12:42.648 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:42.648 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:42.648 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:42.648 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:42.648 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:42.648 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:42.648 06:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:42.907 [2024-08-14 06:44:09.991607] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.907 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:42.908 "name": "raid_bdev1", 00:12:42.908 "aliases": [ 00:12:42.908 "bdeccb9e-c910-451c-99de-509d52f0ff6f" 00:12:42.908 ], 00:12:42.908 "product_name": "Raid Volume", 00:12:42.908 "block_size": 512, 00:12:42.908 "num_blocks": 63488, 00:12:42.908 "uuid": "bdeccb9e-c910-451c-99de-509d52f0ff6f", 00:12:42.908 "assigned_rate_limits": { 00:12:42.908 "rw_ios_per_sec": 0, 00:12:42.908 "rw_mbytes_per_sec": 0, 00:12:42.908 "r_mbytes_per_sec": 0, 00:12:42.908 "w_mbytes_per_sec": 0 00:12:42.908 }, 00:12:42.908 "claimed": false, 00:12:42.908 "zoned": false, 00:12:42.908 "supported_io_types": { 00:12:42.908 "read": true, 00:12:42.908 "write": true, 00:12:42.908 "unmap": false, 00:12:42.908 "flush": false, 00:12:42.908 "reset": true, 00:12:42.908 "nvme_admin": false, 00:12:42.908 "nvme_io": false, 00:12:42.908 "nvme_io_md": false, 00:12:42.908 "write_zeroes": true, 00:12:42.908 "zcopy": false, 00:12:42.908 "get_zone_info": false, 00:12:42.908 "zone_management": false, 00:12:42.908 "zone_append": false, 00:12:42.908 "compare": false, 00:12:42.908 "compare_and_write": false, 00:12:42.908 "abort": false, 00:12:42.908 "seek_hole": false, 00:12:42.908 "seek_data": false, 
00:12:42.908 "copy": false, 00:12:42.908 "nvme_iov_md": false 00:12:42.908 }, 00:12:42.908 "memory_domains": [ 00:12:42.908 { 00:12:42.908 "dma_device_id": "system", 00:12:42.908 "dma_device_type": 1 00:12:42.908 }, 00:12:42.908 { 00:12:42.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.908 "dma_device_type": 2 00:12:42.908 }, 00:12:42.908 { 00:12:42.908 "dma_device_id": "system", 00:12:42.908 "dma_device_type": 1 00:12:42.908 }, 00:12:42.908 { 00:12:42.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.908 "dma_device_type": 2 00:12:42.908 }, 00:12:42.908 { 00:12:42.908 "dma_device_id": "system", 00:12:42.908 "dma_device_type": 1 00:12:42.908 }, 00:12:42.908 { 00:12:42.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.908 "dma_device_type": 2 00:12:42.908 } 00:12:42.908 ], 00:12:42.908 "driver_specific": { 00:12:42.908 "raid": { 00:12:42.908 "uuid": "bdeccb9e-c910-451c-99de-509d52f0ff6f", 00:12:42.908 "strip_size_kb": 0, 00:12:42.908 "state": "online", 00:12:42.908 "raid_level": "raid1", 00:12:42.908 "superblock": true, 00:12:42.908 "num_base_bdevs": 3, 00:12:42.908 "num_base_bdevs_discovered": 3, 00:12:42.908 "num_base_bdevs_operational": 3, 00:12:42.908 "base_bdevs_list": [ 00:12:42.908 { 00:12:42.908 "name": "pt1", 00:12:42.908 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:42.908 "is_configured": true, 00:12:42.908 "data_offset": 2048, 00:12:42.908 "data_size": 63488 00:12:42.908 }, 00:12:42.908 { 00:12:42.908 "name": "pt2", 00:12:42.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:42.908 "is_configured": true, 00:12:42.908 "data_offset": 2048, 00:12:42.908 "data_size": 63488 00:12:42.908 }, 00:12:42.908 { 00:12:42.908 "name": "pt3", 00:12:42.908 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:42.908 "is_configured": true, 00:12:42.908 "data_offset": 2048, 00:12:42.908 "data_size": 63488 00:12:42.908 } 00:12:42.908 ] 00:12:42.908 } 00:12:42.908 } 00:12:42.908 }' 00:12:42.908 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:42.908 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:42.908 pt2 00:12:42.908 pt3' 00:12:42.908 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:42.908 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:42.908 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:43.167 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:43.167 "name": "pt1", 00:12:43.167 "aliases": [ 00:12:43.167 "00000000-0000-0000-0000-000000000001" 00:12:43.167 ], 00:12:43.167 "product_name": "passthru", 00:12:43.167 "block_size": 512, 00:12:43.167 "num_blocks": 65536, 00:12:43.167 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:43.167 "assigned_rate_limits": { 00:12:43.167 "rw_ios_per_sec": 0, 00:12:43.167 "rw_mbytes_per_sec": 0, 00:12:43.167 "r_mbytes_per_sec": 0, 00:12:43.167 "w_mbytes_per_sec": 0 00:12:43.167 }, 00:12:43.167 "claimed": true, 00:12:43.167 "claim_type": "exclusive_write", 00:12:43.167 "zoned": false, 00:12:43.167 "supported_io_types": { 00:12:43.167 "read": true, 00:12:43.167 "write": true, 00:12:43.168 "unmap": true, 00:12:43.168 "flush": true, 00:12:43.168 "reset": true, 00:12:43.168 "nvme_admin": false, 00:12:43.168 "nvme_io": 
false, 00:12:43.168 "nvme_io_md": false, 00:12:43.168 "write_zeroes": true, 00:12:43.168 "zcopy": true, 00:12:43.168 "get_zone_info": false, 00:12:43.168 "zone_management": false, 00:12:43.168 "zone_append": false, 00:12:43.168 "compare": false, 00:12:43.168 "compare_and_write": false, 00:12:43.168 "abort": true, 00:12:43.168 "seek_hole": false, 00:12:43.168 "seek_data": false, 00:12:43.168 "copy": true, 00:12:43.168 "nvme_iov_md": false 00:12:43.168 }, 00:12:43.168 "memory_domains": [ 00:12:43.168 { 00:12:43.168 "dma_device_id": "system", 00:12:43.168 "dma_device_type": 1 00:12:43.168 }, 00:12:43.168 { 00:12:43.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.168 "dma_device_type": 2 00:12:43.168 } 00:12:43.168 ], 00:12:43.168 "driver_specific": { 00:12:43.168 "passthru": { 00:12:43.168 "name": "pt1", 00:12:43.168 "base_bdev_name": "malloc1" 00:12:43.168 } 00:12:43.168 } 00:12:43.168 }' 00:12:43.168 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.168 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.168 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:43.168 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.426 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.426 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:43.426 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.426 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.426 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:43.426 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.426 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.426 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:43.426 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:43.426 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:43.427 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:43.687 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:43.687 "name": "pt2", 00:12:43.687 "aliases": [ 00:12:43.687 "00000000-0000-0000-0000-000000000002" 00:12:43.687 ], 00:12:43.687 "product_name": "passthru", 00:12:43.687 "block_size": 512, 00:12:43.687 "num_blocks": 65536, 00:12:43.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:43.687 "assigned_rate_limits": { 00:12:43.687 "rw_ios_per_sec": 0, 00:12:43.687 "rw_mbytes_per_sec": 0, 00:12:43.687 "r_mbytes_per_sec": 0, 00:12:43.687 "w_mbytes_per_sec": 0 00:12:43.687 }, 00:12:43.687 "claimed": true, 00:12:43.687 "claim_type": "exclusive_write", 00:12:43.687 "zoned": false, 00:12:43.687 "supported_io_types": { 00:12:43.687 "read": true, 00:12:43.687 "write": true, 00:12:43.687 "unmap": true, 00:12:43.687 "flush": true, 00:12:43.687 "reset": true, 00:12:43.687 "nvme_admin": false, 00:12:43.687 "nvme_io": false, 00:12:43.687 "nvme_io_md": false, 00:12:43.687 "write_zeroes": true, 00:12:43.687 "zcopy": true, 00:12:43.687 "get_zone_info": false, 00:12:43.687 
"zone_management": false, 00:12:43.687 "zone_append": false, 00:12:43.687 "compare": false, 00:12:43.687 "compare_and_write": false, 00:12:43.687 "abort": true, 00:12:43.687 "seek_hole": false, 00:12:43.687 "seek_data": false, 00:12:43.687 "copy": true, 00:12:43.687 "nvme_iov_md": false 00:12:43.687 }, 00:12:43.687 "memory_domains": [ 00:12:43.687 { 00:12:43.687 "dma_device_id": "system", 00:12:43.687 "dma_device_type": 1 00:12:43.687 }, 00:12:43.687 { 00:12:43.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.687 "dma_device_type": 2 00:12:43.687 } 00:12:43.687 ], 00:12:43.687 "driver_specific": { 00:12:43.687 "passthru": { 00:12:43.687 "name": "pt2", 00:12:43.687 "base_bdev_name": "malloc2" 00:12:43.687 } 00:12:43.687 } 00:12:43.687 }' 00:12:43.687 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.687 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.687 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:43.687 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.947 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.947 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:43.947 06:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.947 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.947 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:43.947 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.947 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.947 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:43.947 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:43.947 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:43.947 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:44.207 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:44.207 "name": "pt3", 00:12:44.207 "aliases": [ 00:12:44.207 "00000000-0000-0000-0000-000000000003" 00:12:44.207 ], 00:12:44.207 "product_name": "passthru", 00:12:44.207 "block_size": 512, 00:12:44.207 "num_blocks": 65536, 00:12:44.207 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:44.207 "assigned_rate_limits": { 00:12:44.207 "rw_ios_per_sec": 0, 00:12:44.207 "rw_mbytes_per_sec": 0, 00:12:44.207 "r_mbytes_per_sec": 0, 00:12:44.207 "w_mbytes_per_sec": 0 00:12:44.207 }, 00:12:44.207 "claimed": true, 00:12:44.207 "claim_type": "exclusive_write", 00:12:44.207 "zoned": false, 00:12:44.207 "supported_io_types": { 00:12:44.207 "read": true, 00:12:44.207 "write": true, 00:12:44.207 "unmap": true, 00:12:44.207 "flush": true, 00:12:44.207 "reset": true, 00:12:44.207 "nvme_admin": false, 00:12:44.207 "nvme_io": false, 00:12:44.207 "nvme_io_md": false, 00:12:44.207 "write_zeroes": true, 00:12:44.207 "zcopy": true, 00:12:44.207 "get_zone_info": false, 00:12:44.207 "zone_management": false, 00:12:44.207 "zone_append": false, 00:12:44.207 "compare": false, 00:12:44.207 "compare_and_write": false, 00:12:44.207 "abort": 
true, 00:12:44.207 "seek_hole": false, 00:12:44.207 "seek_data": false, 00:12:44.207 "copy": true, 00:12:44.207 "nvme_iov_md": false 00:12:44.207 }, 00:12:44.207 "memory_domains": [ 00:12:44.207 { 00:12:44.207 "dma_device_id": "system", 00:12:44.207 "dma_device_type": 1 00:12:44.207 }, 00:12:44.207 { 00:12:44.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.207 "dma_device_type": 2 00:12:44.207 } 00:12:44.207 ], 00:12:44.207 "driver_specific": { 00:12:44.207 "passthru": { 00:12:44.207 "name": "pt3", 00:12:44.207 "base_bdev_name": "malloc3" 00:12:44.207 } 00:12:44.207 } 00:12:44.207 }' 00:12:44.207 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.207 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.207 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:44.207 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:44.207 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:44.466 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:44.467 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:44.467 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:44.467 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:44.467 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.467 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.467 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:44.467 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:12:44.467 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:44.726 [2024-08-14 06:44:11.840474] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.726 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=bdeccb9e-c910-451c-99de-509d52f0ff6f 00:12:44.726 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z bdeccb9e-c910-451c-99de-509d52f0ff6f ']' 00:12:44.726 06:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:44.984 [2024-08-14 06:44:12.059820] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:44.984 [2024-08-14 06:44:12.059866] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.984 [2024-08-14 06:44:12.059951] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.984 [2024-08-14 06:44:12.060032] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.984 [2024-08-14 06:44:12.060043] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:44.984 06:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.984 06:44:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:12:45.244 06:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:12:45.244 06:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:12:45.244 06:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:12:45.244 06:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:45.244 06:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:12:45.244 06:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:45.503 06:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:12:45.503 06:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:45.764 06:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:45.764 06:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:46.025 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:12:46.025 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:46.025 06:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:12:46.025 06:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:46.025 06:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.025 06:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:12:46.025 06:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.025 06:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:12:46.025 06:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.025 06:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:12:46.025 06:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.025 06:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:46.025 06:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:46.285 [2024-08-14 06:44:13.285730] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:46.285 [2024-08-14 
06:44:13.287667] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:46.285 [2024-08-14 06:44:13.287777] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:46.285 [2024-08-14 06:44:13.287850] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:46.285 [2024-08-14 06:44:13.287974] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:46.285 [2024-08-14 06:44:13.288032] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:46.285 [2024-08-14 06:44:13.288080] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.285 [2024-08-14 06:44:13.288139] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:12:46.285 request: 00:12:46.285 { 00:12:46.285 "name": "raid_bdev1", 00:12:46.285 "raid_level": "raid1", 00:12:46.285 "base_bdevs": [ 00:12:46.285 "malloc1", 00:12:46.285 "malloc2", 00:12:46.285 "malloc3" 00:12:46.285 ], 00:12:46.285 "superblock": false, 00:12:46.285 "method": "bdev_raid_create", 00:12:46.285 "req_id": 1 00:12:46.285 } 00:12:46.285 Got JSON-RPC error response 00:12:46.285 response: 00:12:46.285 { 00:12:46.285 "code": -17, 00:12:46.285 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:46.285 } 00:12:46.285 06:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:12:46.285 06:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:12:46.285 06:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:12:46.285 06:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:12:46.285 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.285 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:12:46.285 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:12:46.285 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:12:46.285 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:46.545 [2024-08-14 06:44:13.685006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:46.545 [2024-08-14 06:44:13.685162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.545 [2024-08-14 06:44:13.685213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:46.545 [2024-08-14 06:44:13.685243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.545 [2024-08-14 06:44:13.687450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.545 [2024-08-14 06:44:13.687533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:46.545 [2024-08-14 06:44:13.687665] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:46.545 [2024-08-14 06:44:13.687746] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:46.545 pt1 00:12:46.545 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:46.545 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:46.545 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:46.545 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:46.545 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:46.545 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:46.545 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:46.545 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:46.545 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:46.545 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:46.545 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.545 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.803 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:46.803 "name": "raid_bdev1", 00:12:46.803 "uuid": "bdeccb9e-c910-451c-99de-509d52f0ff6f", 00:12:46.803 "strip_size_kb": 0, 00:12:46.803 "state": "configuring", 00:12:46.803 "raid_level": "raid1", 00:12:46.803 "superblock": true, 00:12:46.803 "num_base_bdevs": 3, 00:12:46.803 "num_base_bdevs_discovered": 1, 00:12:46.803 "num_base_bdevs_operational": 3, 00:12:46.803 "base_bdevs_list": [ 00:12:46.803 { 00:12:46.804 "name": "pt1", 00:12:46.804 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:46.804 "is_configured": true, 00:12:46.804 "data_offset": 2048, 00:12:46.804 "data_size": 63488 00:12:46.804 }, 00:12:46.804 { 00:12:46.804 "name": null, 00:12:46.804 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:46.804 "is_configured": false, 00:12:46.804 "data_offset": 2048, 00:12:46.804 "data_size": 63488 00:12:46.804 }, 00:12:46.804 { 00:12:46.804 "name": null, 00:12:46.804 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:46.804 "is_configured": false, 00:12:46.804 "data_offset": 2048, 00:12:46.804 "data_size": 63488 00:12:46.804 } 00:12:46.804 ] 00:12:46.804 }' 00:12:46.804 06:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:46.804 06:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.372 06:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:12:47.372 06:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:47.632 [2024-08-14 06:44:14.643399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:47.632 [2024-08-14 06:44:14.643483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.632 [2024-08-14 06:44:14.643507] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000009080 00:12:47.632 [2024-08-14 06:44:14.643516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.632 [2024-08-14 06:44:14.643922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.632 [2024-08-14 06:44:14.643939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:47.632 [2024-08-14 06:44:14.644016] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:47.632 [2024-08-14 06:44:14.644036] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:47.632 pt2 00:12:47.632 06:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:47.632 [2024-08-14 06:44:14.867048] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:47.892 06:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:47.892 06:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:47.892 06:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:47.892 06:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:47.892 06:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:47.892 06:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:47.892 06:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:47.892 06:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:47.892 06:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:47.892 06:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:47.892 06:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.892 06:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.892 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:47.892 "name": "raid_bdev1", 00:12:47.892 "uuid": "bdeccb9e-c910-451c-99de-509d52f0ff6f", 00:12:47.892 "strip_size_kb": 0, 00:12:47.892 "state": "configuring", 00:12:47.892 "raid_level": "raid1", 00:12:47.892 "superblock": true, 00:12:47.892 "num_base_bdevs": 3, 00:12:47.892 "num_base_bdevs_discovered": 1, 00:12:47.892 "num_base_bdevs_operational": 3, 00:12:47.892 "base_bdevs_list": [ 00:12:47.892 { 00:12:47.892 "name": "pt1", 00:12:47.892 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:47.892 "is_configured": true, 00:12:47.892 "data_offset": 2048, 00:12:47.892 "data_size": 63488 00:12:47.892 }, 00:12:47.892 { 00:12:47.892 "name": null, 00:12:47.892 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.892 "is_configured": false, 00:12:47.892 "data_offset": 2048, 00:12:47.892 "data_size": 63488 00:12:47.892 }, 00:12:47.892 { 00:12:47.892 "name": null, 00:12:47.892 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.892 "is_configured": false, 00:12:47.892 "data_offset": 2048, 00:12:47.892 "data_size": 63488 00:12:47.892 } 00:12:47.892 ] 00:12:47.892 }' 
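The stretch above is verify_raid_bdev_state doing its work: it captures bdev_raid_get_bdevs all, selects the raid_bdev1 entry with jq, and compares fields such as .state, .raid_level, .num_base_bdevs_discovered and .num_base_bdevs_operational against the expected values (here: configuring, raid1, 1 of 3). A minimal stand-alone sketch of the same check follows; the socket path and rpc.py location are the ones used in this run, while the helper name check_raid_state is hypothetical and not part of bdev_raid.sh:

  #!/usr/bin/env bash
  # Sketch only: re-run the state check outside the test harness.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  check_raid_state() {
      local name=$1 want_state=$2 want_discovered=$3
      local info
      info=$($rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
      [[ $(jq -r '.state' <<< "$info") == "$want_state" ]] || return 1
      [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq "$want_discovered" ]] || return 1
  }
  check_raid_state raid_bdev1 configuring 1   # matches the JSON dumped above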
00:12:47.892 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:47.892 06:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.461 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:12:48.461 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:12:48.461 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:48.719 [2024-08-14 06:44:15.761465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:48.719 [2024-08-14 06:44:15.761540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.719 [2024-08-14 06:44:15.761559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:48.719 [2024-08-14 06:44:15.761570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.719 [2024-08-14 06:44:15.761987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.719 [2024-08-14 06:44:15.762006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:48.719 [2024-08-14 06:44:15.762082] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:48.719 [2024-08-14 06:44:15.762107] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:48.719 pt2 00:12:48.719 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:12:48.719 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:12:48.719 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:48.719 [2024-08-14 06:44:15.969113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:48.719 [2024-08-14 06:44:15.969219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.719 [2024-08-14 06:44:15.969239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:48.719 [2024-08-14 06:44:15.969252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.720 [2024-08-14 06:44:15.969657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.720 [2024-08-14 06:44:15.969690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:48.720 [2024-08-14 06:44:15.969772] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:48.720 [2024-08-14 06:44:15.969797] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:48.720 [2024-08-14 06:44:15.969920] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:12:48.720 [2024-08-14 06:44:15.969934] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:48.720 [2024-08-14 06:44:15.970218] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:12:48.720 [2024-08-14 06:44:15.970406] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:12:48.720 [2024-08-14 
06:44:15.970420] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:12:48.720 [2024-08-14 06:44:15.970525] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.979 pt3 00:12:48.979 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:12:48.979 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:12:48.979 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:48.979 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:48.979 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:48.979 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:48.979 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:48.979 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:48.979 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:48.979 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:48.979 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:48.979 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:48.979 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.979 06:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.979 06:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:48.979 "name": "raid_bdev1", 00:12:48.979 "uuid": "bdeccb9e-c910-451c-99de-509d52f0ff6f", 00:12:48.979 "strip_size_kb": 0, 00:12:48.979 "state": "online", 00:12:48.979 "raid_level": "raid1", 00:12:48.979 "superblock": true, 00:12:48.979 "num_base_bdevs": 3, 00:12:48.979 "num_base_bdevs_discovered": 3, 00:12:48.979 "num_base_bdevs_operational": 3, 00:12:48.979 "base_bdevs_list": [ 00:12:48.979 { 00:12:48.979 "name": "pt1", 00:12:48.979 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:48.979 "is_configured": true, 00:12:48.979 "data_offset": 2048, 00:12:48.979 "data_size": 63488 00:12:48.979 }, 00:12:48.979 { 00:12:48.979 "name": "pt2", 00:12:48.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:48.979 "is_configured": true, 00:12:48.979 "data_offset": 2048, 00:12:48.979 "data_size": 63488 00:12:48.979 }, 00:12:48.979 { 00:12:48.979 "name": "pt3", 00:12:48.979 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:48.979 "is_configured": true, 00:12:48.979 "data_offset": 2048, 00:12:48.979 "data_size": 63488 00:12:48.979 } 00:12:48.979 ] 00:12:48.979 }' 00:12:48.979 06:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:48.979 06:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.546 06:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:12:49.547 06:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:49.547 06:44:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:49.547 06:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:49.547 06:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:49.547 06:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:49.547 06:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:49.547 06:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:49.806 [2024-08-14 06:44:16.931786] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:49.806 06:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:49.806 "name": "raid_bdev1", 00:12:49.806 "aliases": [ 00:12:49.806 "bdeccb9e-c910-451c-99de-509d52f0ff6f" 00:12:49.806 ], 00:12:49.806 "product_name": "Raid Volume", 00:12:49.806 "block_size": 512, 00:12:49.806 "num_blocks": 63488, 00:12:49.806 "uuid": "bdeccb9e-c910-451c-99de-509d52f0ff6f", 00:12:49.806 "assigned_rate_limits": { 00:12:49.806 "rw_ios_per_sec": 0, 00:12:49.806 "rw_mbytes_per_sec": 0, 00:12:49.806 "r_mbytes_per_sec": 0, 00:12:49.806 "w_mbytes_per_sec": 0 00:12:49.806 }, 00:12:49.806 "claimed": false, 00:12:49.806 "zoned": false, 00:12:49.806 "supported_io_types": { 00:12:49.806 "read": true, 00:12:49.806 "write": true, 00:12:49.806 "unmap": false, 00:12:49.806 "flush": false, 00:12:49.806 "reset": true, 00:12:49.806 "nvme_admin": false, 00:12:49.806 "nvme_io": false, 00:12:49.806 "nvme_io_md": false, 00:12:49.806 "write_zeroes": true, 00:12:49.806 "zcopy": false, 00:12:49.806 "get_zone_info": false, 00:12:49.806 "zone_management": false, 00:12:49.806 "zone_append": false, 00:12:49.806 "compare": false, 00:12:49.806 "compare_and_write": false, 00:12:49.806 "abort": false, 00:12:49.806 "seek_hole": false, 00:12:49.806 "seek_data": false, 00:12:49.806 "copy": false, 00:12:49.806 "nvme_iov_md": false 00:12:49.806 }, 00:12:49.806 "memory_domains": [ 00:12:49.806 { 00:12:49.806 "dma_device_id": "system", 00:12:49.806 "dma_device_type": 1 00:12:49.806 }, 00:12:49.806 { 00:12:49.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.806 "dma_device_type": 2 00:12:49.806 }, 00:12:49.806 { 00:12:49.806 "dma_device_id": "system", 00:12:49.806 "dma_device_type": 1 00:12:49.806 }, 00:12:49.806 { 00:12:49.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.806 "dma_device_type": 2 00:12:49.806 }, 00:12:49.806 { 00:12:49.806 "dma_device_id": "system", 00:12:49.806 "dma_device_type": 1 00:12:49.806 }, 00:12:49.806 { 00:12:49.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.806 "dma_device_type": 2 00:12:49.806 } 00:12:49.806 ], 00:12:49.806 "driver_specific": { 00:12:49.806 "raid": { 00:12:49.806 "uuid": "bdeccb9e-c910-451c-99de-509d52f0ff6f", 00:12:49.806 "strip_size_kb": 0, 00:12:49.806 "state": "online", 00:12:49.806 "raid_level": "raid1", 00:12:49.806 "superblock": true, 00:12:49.806 "num_base_bdevs": 3, 00:12:49.806 "num_base_bdevs_discovered": 3, 00:12:49.806 "num_base_bdevs_operational": 3, 00:12:49.806 "base_bdevs_list": [ 00:12:49.806 { 00:12:49.806 "name": "pt1", 00:12:49.806 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:49.806 "is_configured": true, 00:12:49.806 "data_offset": 2048, 00:12:49.806 "data_size": 63488 00:12:49.806 }, 00:12:49.806 { 00:12:49.806 "name": "pt2", 00:12:49.806 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:12:49.806 "is_configured": true, 00:12:49.806 "data_offset": 2048, 00:12:49.806 "data_size": 63488 00:12:49.806 }, 00:12:49.806 { 00:12:49.806 "name": "pt3", 00:12:49.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:49.806 "is_configured": true, 00:12:49.806 "data_offset": 2048, 00:12:49.806 "data_size": 63488 00:12:49.806 } 00:12:49.806 ] 00:12:49.806 } 00:12:49.806 } 00:12:49.806 }' 00:12:49.806 06:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:49.806 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:49.806 pt2 00:12:49.806 pt3' 00:12:49.806 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:49.806 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:49.806 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:50.078 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:50.078 "name": "pt1", 00:12:50.078 "aliases": [ 00:12:50.078 "00000000-0000-0000-0000-000000000001" 00:12:50.078 ], 00:12:50.078 "product_name": "passthru", 00:12:50.078 "block_size": 512, 00:12:50.078 "num_blocks": 65536, 00:12:50.078 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:50.078 "assigned_rate_limits": { 00:12:50.078 "rw_ios_per_sec": 0, 00:12:50.078 "rw_mbytes_per_sec": 0, 00:12:50.078 "r_mbytes_per_sec": 0, 00:12:50.078 "w_mbytes_per_sec": 0 00:12:50.078 }, 00:12:50.078 "claimed": true, 00:12:50.078 "claim_type": "exclusive_write", 00:12:50.078 "zoned": false, 00:12:50.078 "supported_io_types": { 00:12:50.078 "read": true, 00:12:50.078 "write": true, 00:12:50.078 "unmap": true, 00:12:50.078 "flush": true, 00:12:50.078 "reset": true, 00:12:50.078 "nvme_admin": false, 00:12:50.078 "nvme_io": false, 00:12:50.078 "nvme_io_md": false, 00:12:50.079 "write_zeroes": true, 00:12:50.079 "zcopy": true, 00:12:50.079 "get_zone_info": false, 00:12:50.079 "zone_management": false, 00:12:50.079 "zone_append": false, 00:12:50.079 "compare": false, 00:12:50.079 "compare_and_write": false, 00:12:50.079 "abort": true, 00:12:50.079 "seek_hole": false, 00:12:50.079 "seek_data": false, 00:12:50.079 "copy": true, 00:12:50.079 "nvme_iov_md": false 00:12:50.079 }, 00:12:50.079 "memory_domains": [ 00:12:50.079 { 00:12:50.079 "dma_device_id": "system", 00:12:50.079 "dma_device_type": 1 00:12:50.079 }, 00:12:50.079 { 00:12:50.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.079 "dma_device_type": 2 00:12:50.079 } 00:12:50.079 ], 00:12:50.079 "driver_specific": { 00:12:50.079 "passthru": { 00:12:50.079 "name": "pt1", 00:12:50.079 "base_bdev_name": "malloc1" 00:12:50.079 } 00:12:50.079 } 00:12:50.079 }' 00:12:50.079 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:50.079 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:50.079 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:50.079 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:50.362 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:50.362 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:12:50.362 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:50.362 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:50.362 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:50.362 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:50.362 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:50.362 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:50.362 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:50.362 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:50.362 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:50.622 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:50.622 "name": "pt2", 00:12:50.622 "aliases": [ 00:12:50.622 "00000000-0000-0000-0000-000000000002" 00:12:50.622 ], 00:12:50.622 "product_name": "passthru", 00:12:50.622 "block_size": 512, 00:12:50.622 "num_blocks": 65536, 00:12:50.622 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:50.622 "assigned_rate_limits": { 00:12:50.622 "rw_ios_per_sec": 0, 00:12:50.622 "rw_mbytes_per_sec": 0, 00:12:50.622 "r_mbytes_per_sec": 0, 00:12:50.622 "w_mbytes_per_sec": 0 00:12:50.622 }, 00:12:50.622 "claimed": true, 00:12:50.622 "claim_type": "exclusive_write", 00:12:50.622 "zoned": false, 00:12:50.622 "supported_io_types": { 00:12:50.622 "read": true, 00:12:50.622 "write": true, 00:12:50.622 "unmap": true, 00:12:50.622 "flush": true, 00:12:50.622 "reset": true, 00:12:50.622 "nvme_admin": false, 00:12:50.622 "nvme_io": false, 00:12:50.622 "nvme_io_md": false, 00:12:50.622 "write_zeroes": true, 00:12:50.622 "zcopy": true, 00:12:50.622 "get_zone_info": false, 00:12:50.622 "zone_management": false, 00:12:50.622 "zone_append": false, 00:12:50.622 "compare": false, 00:12:50.622 "compare_and_write": false, 00:12:50.622 "abort": true, 00:12:50.622 "seek_hole": false, 00:12:50.622 "seek_data": false, 00:12:50.622 "copy": true, 00:12:50.622 "nvme_iov_md": false 00:12:50.622 }, 00:12:50.622 "memory_domains": [ 00:12:50.622 { 00:12:50.622 "dma_device_id": "system", 00:12:50.622 "dma_device_type": 1 00:12:50.622 }, 00:12:50.622 { 00:12:50.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.622 "dma_device_type": 2 00:12:50.622 } 00:12:50.622 ], 00:12:50.622 "driver_specific": { 00:12:50.622 "passthru": { 00:12:50.622 "name": "pt2", 00:12:50.622 "base_bdev_name": "malloc2" 00:12:50.622 } 00:12:50.622 } 00:12:50.622 }' 00:12:50.622 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:50.622 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:50.622 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:50.622 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:50.622 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:50.882 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:50.882 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:50.882 06:44:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:50.882 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:50.882 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:50.882 06:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:50.882 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:50.882 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:50.882 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:50.882 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:51.165 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:51.165 "name": "pt3", 00:12:51.165 "aliases": [ 00:12:51.165 "00000000-0000-0000-0000-000000000003" 00:12:51.165 ], 00:12:51.165 "product_name": "passthru", 00:12:51.165 "block_size": 512, 00:12:51.165 "num_blocks": 65536, 00:12:51.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:51.165 "assigned_rate_limits": { 00:12:51.165 "rw_ios_per_sec": 0, 00:12:51.165 "rw_mbytes_per_sec": 0, 00:12:51.165 "r_mbytes_per_sec": 0, 00:12:51.165 "w_mbytes_per_sec": 0 00:12:51.165 }, 00:12:51.165 "claimed": true, 00:12:51.165 "claim_type": "exclusive_write", 00:12:51.165 "zoned": false, 00:12:51.165 "supported_io_types": { 00:12:51.165 "read": true, 00:12:51.165 "write": true, 00:12:51.165 "unmap": true, 00:12:51.165 "flush": true, 00:12:51.165 "reset": true, 00:12:51.165 "nvme_admin": false, 00:12:51.165 "nvme_io": false, 00:12:51.165 "nvme_io_md": false, 00:12:51.165 "write_zeroes": true, 00:12:51.165 "zcopy": true, 00:12:51.165 "get_zone_info": false, 00:12:51.165 "zone_management": false, 00:12:51.165 "zone_append": false, 00:12:51.165 "compare": false, 00:12:51.165 "compare_and_write": false, 00:12:51.165 "abort": true, 00:12:51.165 "seek_hole": false, 00:12:51.165 "seek_data": false, 00:12:51.165 "copy": true, 00:12:51.165 "nvme_iov_md": false 00:12:51.165 }, 00:12:51.165 "memory_domains": [ 00:12:51.165 { 00:12:51.165 "dma_device_id": "system", 00:12:51.165 "dma_device_type": 1 00:12:51.165 }, 00:12:51.165 { 00:12:51.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.165 "dma_device_type": 2 00:12:51.165 } 00:12:51.165 ], 00:12:51.165 "driver_specific": { 00:12:51.165 "passthru": { 00:12:51.165 "name": "pt3", 00:12:51.165 "base_bdev_name": "malloc3" 00:12:51.165 } 00:12:51.165 } 00:12:51.165 }' 00:12:51.165 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:51.165 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:51.165 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:51.165 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:51.165 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:51.165 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:51.426 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:51.426 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:51.426 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
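Every member bdev is then fetched on its own with bdev_get_bdevs -b <name>, and its block_size, md_size, md_interleave and dif_type are compared, which is exactly what the @205-@208 checks around this point do for pt1, pt2 and pt3 (512-byte blocks, no metadata, no DIF). The same loop written out as a stand-alone sketch, under the same socket assumption:

  #!/usr/bin/env bash
  # Sketch only: the per-base-bdev property checks run for each passthru member.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for name in pt1 pt2 pt3; do
      bdev=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      [[ $(jq '.block_size'    <<< "$bdev") == 512  ]] || echo "$name: unexpected block_size"
      [[ $(jq '.md_size'       <<< "$bdev") == null ]] || echo "$name: unexpected md_size"
      [[ $(jq '.md_interleave' <<< "$bdev") == null ]] || echo "$name: unexpected md_interleave"
      [[ $(jq '.dif_type'      <<< "$bdev") == null ]] || echo "$name: unexpected dif_type"
  done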
00:12:51.426 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:51.426 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:51.426 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:51.426 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:51.426 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:12:51.686 [2024-08-14 06:44:18.764662] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:51.686 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' bdeccb9e-c910-451c-99de-509d52f0ff6f '!=' bdeccb9e-c910-451c-99de-509d52f0ff6f ']' 00:12:51.686 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:12:51.686 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:51.686 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:51.686 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:51.946 [2024-08-14 06:44:18.968112] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:51.946 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:51.946 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:51.946 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:51.946 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:51.946 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:51.946 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:51.946 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:51.946 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:51.946 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:51.946 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:51.946 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.946 06:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.946 06:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:51.946 "name": "raid_bdev1", 00:12:51.946 "uuid": "bdeccb9e-c910-451c-99de-509d52f0ff6f", 00:12:51.946 "strip_size_kb": 0, 00:12:51.946 "state": "online", 00:12:51.946 "raid_level": "raid1", 00:12:51.946 "superblock": true, 00:12:51.946 "num_base_bdevs": 3, 00:12:51.946 "num_base_bdevs_discovered": 2, 00:12:51.946 "num_base_bdevs_operational": 2, 00:12:51.946 "base_bdevs_list": [ 00:12:51.946 { 00:12:51.946 "name": null, 00:12:51.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.946 "is_configured": false, 00:12:51.946 "data_offset": 2048, 00:12:51.946 "data_size": 63488 
00:12:51.946 }, 00:12:51.946 { 00:12:51.946 "name": "pt2", 00:12:51.946 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:51.946 "is_configured": true, 00:12:51.946 "data_offset": 2048, 00:12:51.946 "data_size": 63488 00:12:51.946 }, 00:12:51.946 { 00:12:51.946 "name": "pt3", 00:12:51.946 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:51.946 "is_configured": true, 00:12:51.946 "data_offset": 2048, 00:12:51.946 "data_size": 63488 00:12:51.946 } 00:12:51.946 ] 00:12:51.946 }' 00:12:51.946 06:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:51.946 06:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.516 06:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:52.776 [2024-08-14 06:44:19.874499] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:52.776 [2024-08-14 06:44:19.874594] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.776 [2024-08-14 06:44:19.874698] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.776 [2024-08-14 06:44:19.874773] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.776 [2024-08-14 06:44:19.874825] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:12:52.776 06:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.776 06:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:12:53.036 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:12:53.036 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:12:53.036 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:12:53.036 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:12:53.036 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:53.295 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:12:53.295 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:12:53.295 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:53.295 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:12:53.295 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:12:53.295 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:12:53.295 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:12:53.295 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:53.555 [2024-08-14 06:44:20.677089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:53.555 [2024-08-14 
06:44:20.677180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.555 [2024-08-14 06:44:20.677198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:53.555 [2024-08-14 06:44:20.677209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.555 [2024-08-14 06:44:20.679259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.555 [2024-08-14 06:44:20.679297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:53.555 [2024-08-14 06:44:20.679376] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:53.555 [2024-08-14 06:44:20.679410] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:53.555 pt2 00:12:53.555 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:53.555 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:53.555 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:53.555 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:53.555 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:53.555 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:53.555 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:53.555 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:53.555 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:53.555 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:53.555 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.555 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.815 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:53.815 "name": "raid_bdev1", 00:12:53.815 "uuid": "bdeccb9e-c910-451c-99de-509d52f0ff6f", 00:12:53.815 "strip_size_kb": 0, 00:12:53.815 "state": "configuring", 00:12:53.815 "raid_level": "raid1", 00:12:53.815 "superblock": true, 00:12:53.815 "num_base_bdevs": 3, 00:12:53.815 "num_base_bdevs_discovered": 1, 00:12:53.815 "num_base_bdevs_operational": 2, 00:12:53.815 "base_bdevs_list": [ 00:12:53.815 { 00:12:53.815 "name": null, 00:12:53.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.815 "is_configured": false, 00:12:53.815 "data_offset": 2048, 00:12:53.815 "data_size": 63488 00:12:53.815 }, 00:12:53.815 { 00:12:53.815 "name": "pt2", 00:12:53.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:53.815 "is_configured": true, 00:12:53.815 "data_offset": 2048, 00:12:53.815 "data_size": 63488 00:12:53.815 }, 00:12:53.815 { 00:12:53.815 "name": null, 00:12:53.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:53.815 "is_configured": false, 00:12:53.815 "data_offset": 2048, 00:12:53.815 "data_size": 63488 00:12:53.815 } 00:12:53.815 ] 00:12:53.815 }' 00:12:53.815 06:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
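The sequence above tears the volume down (bdev_raid_delete followed by bdev_passthru_delete for the remaining members) and then recreates only pt2 on top of malloc2 with its fixed UUID; because malloc2 still carries the raid1 superblock written earlier, the examine path immediately re-registers raid_bdev1 in the configuring state with one of two remaining members discovered, which is what the JSON just dumped confirms. A sketch of that rebuild step, assuming the same socket and that the superblock on malloc2 is still intact:

  #!/usr/bin/env bash
  # Sketch only: recreate one member and let the examine path reassemble the array.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # raid_bdev1 should now be back, still assembling
  $rpc bdev_raid_get_bdevs all | \
      jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'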
00:12:53.815 06:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.384 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:12:54.384 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:12:54.384 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:12:54.384 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:54.384 [2024-08-14 06:44:21.627494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:54.384 [2024-08-14 06:44:21.627654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.384 [2024-08-14 06:44:21.627691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:54.384 [2024-08-14 06:44:21.627720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.384 [2024-08-14 06:44:21.628156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.384 [2024-08-14 06:44:21.628236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:54.384 [2024-08-14 06:44:21.628346] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:54.384 [2024-08-14 06:44:21.628407] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:54.384 [2024-08-14 06:44:21.628538] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:12:54.384 [2024-08-14 06:44:21.628577] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:54.384 [2024-08-14 06:44:21.628824] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:54.384 [2024-08-14 06:44:21.628995] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:12:54.384 [2024-08-14 06:44:21.629038] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:12:54.384 [2024-08-14 06:44:21.629194] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.384 pt3 00:12:54.644 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:54.644 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:54.644 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:54.644 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:54.644 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:54.644 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:54.644 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:54.644 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:54.644 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:54.644 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:54.644 06:44:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.644 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.644 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:54.644 "name": "raid_bdev1", 00:12:54.644 "uuid": "bdeccb9e-c910-451c-99de-509d52f0ff6f", 00:12:54.644 "strip_size_kb": 0, 00:12:54.644 "state": "online", 00:12:54.644 "raid_level": "raid1", 00:12:54.644 "superblock": true, 00:12:54.644 "num_base_bdevs": 3, 00:12:54.644 "num_base_bdevs_discovered": 2, 00:12:54.644 "num_base_bdevs_operational": 2, 00:12:54.644 "base_bdevs_list": [ 00:12:54.644 { 00:12:54.644 "name": null, 00:12:54.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.644 "is_configured": false, 00:12:54.644 "data_offset": 2048, 00:12:54.644 "data_size": 63488 00:12:54.644 }, 00:12:54.644 { 00:12:54.644 "name": "pt2", 00:12:54.644 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:54.644 "is_configured": true, 00:12:54.644 "data_offset": 2048, 00:12:54.644 "data_size": 63488 00:12:54.644 }, 00:12:54.644 { 00:12:54.644 "name": "pt3", 00:12:54.644 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:54.644 "is_configured": true, 00:12:54.644 "data_offset": 2048, 00:12:54.644 "data_size": 63488 00:12:54.644 } 00:12:54.644 ] 00:12:54.644 }' 00:12:54.644 06:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:54.644 06:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.213 06:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:55.473 [2024-08-14 06:44:22.553902] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.473 [2024-08-14 06:44:22.554020] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.473 [2024-08-14 06:44:22.554117] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.473 [2024-08-14 06:44:22.554222] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.473 [2024-08-14 06:44:22.554270] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:12:55.473 06:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:12:55.473 06:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.733 06:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:12:55.733 06:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:12:55.733 06:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 3 -gt 2 ']' 00:12:55.733 06:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=2 00:12:55.733 06:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:55.733 06:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
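After the array is deleted again, the harness proves it is really gone by capturing bdev_raid_get_bdevs all through jq -r '.[]' and testing the result with '[' -n ... ']' before it starts rebuilding members from their stale superblocks. A stand-alone sketch of that emptiness check, same socket assumption, with an illustrative variable name:

  #!/usr/bin/env bash
  # Sketch only: assert that no raid bdev is registered any more.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[]')
  [[ -z $raid_bdev ]] && echo "raid_bdev1 deleted, nothing left to tear down"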
00:12:55.992 [2024-08-14 06:44:23.132909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:55.992 [2024-08-14 06:44:23.132976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.992 [2024-08-14 06:44:23.132998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:55.993 [2024-08-14 06:44:23.133007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.993 [2024-08-14 06:44:23.135202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.993 [2024-08-14 06:44:23.135237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:55.993 [2024-08-14 06:44:23.135322] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:55.993 [2024-08-14 06:44:23.135370] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:55.993 [2024-08-14 06:44:23.135483] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:55.993 [2024-08-14 06:44:23.135493] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.993 [2024-08-14 06:44:23.135524] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:12:55.993 [2024-08-14 06:44:23.135560] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:55.993 pt1 00:12:55.993 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3 -gt 2 ']' 00:12:55.993 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:55.993 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:55.993 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:55.993 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:55.993 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:55.993 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:55.993 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:55.993 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:55.993 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:55.993 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:55.993 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.993 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.252 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:56.252 "name": "raid_bdev1", 00:12:56.252 "uuid": "bdeccb9e-c910-451c-99de-509d52f0ff6f", 00:12:56.252 "strip_size_kb": 0, 00:12:56.252 "state": "configuring", 00:12:56.252 "raid_level": "raid1", 00:12:56.252 "superblock": true, 00:12:56.252 "num_base_bdevs": 3, 00:12:56.252 "num_base_bdevs_discovered": 1, 00:12:56.252 "num_base_bdevs_operational": 2, 00:12:56.252 
"base_bdevs_list": [ 00:12:56.252 { 00:12:56.252 "name": null, 00:12:56.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.252 "is_configured": false, 00:12:56.252 "data_offset": 2048, 00:12:56.252 "data_size": 63488 00:12:56.252 }, 00:12:56.252 { 00:12:56.252 "name": "pt2", 00:12:56.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.252 "is_configured": true, 00:12:56.252 "data_offset": 2048, 00:12:56.252 "data_size": 63488 00:12:56.252 }, 00:12:56.252 { 00:12:56.252 "name": null, 00:12:56.252 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.252 "is_configured": false, 00:12:56.252 "data_offset": 2048, 00:12:56.252 "data_size": 63488 00:12:56.252 } 00:12:56.252 ] 00:12:56.252 }' 00:12:56.252 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:56.252 06:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.822 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:12:56.822 06:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:56.822 06:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:12:56.822 06:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:57.082 [2024-08-14 06:44:24.254950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:57.082 [2024-08-14 06:44:24.255032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.082 [2024-08-14 06:44:24.255052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:57.082 [2024-08-14 06:44:24.255062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.082 [2024-08-14 06:44:24.255470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.082 [2024-08-14 06:44:24.255491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:57.082 [2024-08-14 06:44:24.255574] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:57.082 [2024-08-14 06:44:24.255595] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:57.082 [2024-08-14 06:44:24.255693] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:12:57.082 [2024-08-14 06:44:24.255701] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:57.082 [2024-08-14 06:44:24.255935] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:12:57.082 [2024-08-14 06:44:24.256056] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:12:57.082 [2024-08-14 06:44:24.256068] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:12:57.082 [2024-08-14 06:44:24.256163] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.082 pt3 00:12:57.082 06:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:57.082 06:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:12:57.082 06:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:57.082 06:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:57.082 06:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:57.082 06:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:57.082 06:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:57.082 06:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:57.082 06:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:57.082 06:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:57.082 06:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.082 06:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.342 06:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:57.342 "name": "raid_bdev1", 00:12:57.342 "uuid": "bdeccb9e-c910-451c-99de-509d52f0ff6f", 00:12:57.342 "strip_size_kb": 0, 00:12:57.342 "state": "online", 00:12:57.342 "raid_level": "raid1", 00:12:57.342 "superblock": true, 00:12:57.342 "num_base_bdevs": 3, 00:12:57.342 "num_base_bdevs_discovered": 2, 00:12:57.342 "num_base_bdevs_operational": 2, 00:12:57.342 "base_bdevs_list": [ 00:12:57.342 { 00:12:57.342 "name": null, 00:12:57.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.342 "is_configured": false, 00:12:57.342 "data_offset": 2048, 00:12:57.342 "data_size": 63488 00:12:57.342 }, 00:12:57.342 { 00:12:57.342 "name": "pt2", 00:12:57.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.342 "is_configured": true, 00:12:57.342 "data_offset": 2048, 00:12:57.342 "data_size": 63488 00:12:57.342 }, 00:12:57.342 { 00:12:57.342 "name": "pt3", 00:12:57.342 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.342 "is_configured": true, 00:12:57.342 "data_offset": 2048, 00:12:57.342 "data_size": 63488 00:12:57.342 } 00:12:57.342 ] 00:12:57.342 }' 00:12:57.342 06:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:57.342 06:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.936 06:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:12:57.936 06:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:58.196 06:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:12:58.196 06:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:12:58.196 06:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:58.196 [2024-08-14 06:44:25.409295] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.196 06:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' bdeccb9e-c910-451c-99de-509d52f0ff6f '!=' 
bdeccb9e-c910-451c-99de-509d52f0ff6f ']' 00:12:58.196 06:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 82519 00:12:58.196 06:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 82519 ']' 00:12:58.196 06:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 82519 00:12:58.196 06:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:12:58.196 06:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:58.196 06:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82519 00:12:58.455 killing process with pid 82519 00:12:58.455 06:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:58.455 06:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:58.455 06:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82519' 00:12:58.455 06:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 82519 00:12:58.455 [2024-08-14 06:44:25.457753] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:58.455 [2024-08-14 06:44:25.457850] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.455 [2024-08-14 06:44:25.457910] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to fr 06:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 82519 00:12:58.455 ee all in destruct 00:12:58.455 [2024-08-14 06:44:25.457924] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:12:58.455 [2024-08-14 06:44:25.490071] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:58.715 06:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:12:58.715 00:12:58.715 real 0m19.044s 00:12:58.715 user 0m35.204s 00:12:58.715 sys 0m2.874s 00:12:58.715 06:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:58.715 06:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.715 ************************************ 00:12:58.715 END TEST raid_superblock_test 00:12:58.715 ************************************ 00:12:58.715 06:44:25 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:12:58.715 06:44:25 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:58.715 06:44:25 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:58.715 06:44:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:58.715 ************************************ 00:12:58.715 START TEST raid_read_error_test 00:12:58.715 ************************************ 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 3 read 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:12:58.715 06:44:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.XgSjrVG2Fd 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=83198 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 83198 /var/tmp/spdk-raid.sock 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 83198 ']' 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:58.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
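A quick orientation before the trace continues: raid_read_error_test drives everything through scripts/rpc.py against the RPC socket that bdevperf opens at /var/tmp/spdk-raid.sock. Each base bdev built in the trace below is a malloc bdev wrapped first in an error bdev and then in a passthru bdev, and the three passthru bdevs are combined into a raid1 array with a superblock (-s). A condensed sketch of that sequence for one base bdev, reusing the socket path and names from this run (the trace uses the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path; it is abbreviated here):

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc        # error bdev shows up as EE_BaseBdev1_malloc
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s

The bdev_raid_create step is issued once, after BaseBdev2 and BaseBdev3 have been built the same way, as the trace below shows.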
00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:58.715 06:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.715 [2024-08-14 06:44:25.891500] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:12:58.715 [2024-08-14 06:44:25.891614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83198 ] 00:12:58.975 [2024-08-14 06:44:26.018801] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.975 [2024-08-14 06:44:26.062107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.975 [2024-08-14 06:44:26.103721] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.975 [2024-08-14 06:44:26.103763] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.543 06:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:59.543 06:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:12:59.543 06:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:59.543 06:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:59.803 BaseBdev1_malloc 00:12:59.803 06:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:00.062 true 00:13:00.062 06:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:00.063 [2024-08-14 06:44:27.290842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:00.063 [2024-08-14 06:44:27.290931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.063 [2024-08-14 06:44:27.290954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:13:00.063 [2024-08-14 06:44:27.290966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.063 [2024-08-14 06:44:27.293120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.063 [2024-08-14 06:44:27.293165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:00.063 BaseBdev1 00:13:00.063 06:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:00.063 06:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:00.322 BaseBdev2_malloc 00:13:00.322 06:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:00.582 true 00:13:00.582 06:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc 
-p BaseBdev2 00:13:00.843 [2024-08-14 06:44:27.878593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:00.843 [2024-08-14 06:44:27.878688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.843 [2024-08-14 06:44:27.878710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:13:00.843 [2024-08-14 06:44:27.878721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.843 [2024-08-14 06:44:27.880926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.843 [2024-08-14 06:44:27.880964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:00.843 BaseBdev2 00:13:00.843 06:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:00.843 06:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:00.843 BaseBdev3_malloc 00:13:00.843 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:01.102 true 00:13:01.102 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:01.362 [2024-08-14 06:44:28.477863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:01.362 [2024-08-14 06:44:28.477950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.362 [2024-08-14 06:44:28.477975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:13:01.362 [2024-08-14 06:44:28.477986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.362 [2024-08-14 06:44:28.480116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.362 [2024-08-14 06:44:28.480155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:01.362 BaseBdev3 00:13:01.362 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:01.622 [2024-08-14 06:44:28.685563] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.622 [2024-08-14 06:44:28.687422] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:01.622 [2024-08-14 06:44:28.687498] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:01.622 [2024-08-14 06:44:28.687696] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:01.622 [2024-08-14 06:44:28.687732] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:01.622 [2024-08-14 06:44:28.688066] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:13:01.622 [2024-08-14 06:44:28.688231] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:01.622 [2024-08-14 06:44:28.688250] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000001c80 00:13:01.622 [2024-08-14 06:44:28.688399] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.622 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:01.622 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:01.622 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:01.622 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:01.622 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:01.622 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:01.622 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:01.622 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:01.622 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:01.622 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:01.622 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.622 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.882 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:01.882 "name": "raid_bdev1", 00:13:01.882 "uuid": "c27ae7c1-444e-4fd9-81cb-2f1eec6f9c0d", 00:13:01.882 "strip_size_kb": 0, 00:13:01.882 "state": "online", 00:13:01.882 "raid_level": "raid1", 00:13:01.882 "superblock": true, 00:13:01.882 "num_base_bdevs": 3, 00:13:01.882 "num_base_bdevs_discovered": 3, 00:13:01.882 "num_base_bdevs_operational": 3, 00:13:01.882 "base_bdevs_list": [ 00:13:01.882 { 00:13:01.882 "name": "BaseBdev1", 00:13:01.882 "uuid": "05579c64-67f3-5961-a5c6-f3b2ea60e040", 00:13:01.882 "is_configured": true, 00:13:01.882 "data_offset": 2048, 00:13:01.882 "data_size": 63488 00:13:01.882 }, 00:13:01.882 { 00:13:01.882 "name": "BaseBdev2", 00:13:01.882 "uuid": "dc0a813d-0e8e-56a9-b5a9-8b3acfa29bdd", 00:13:01.882 "is_configured": true, 00:13:01.882 "data_offset": 2048, 00:13:01.882 "data_size": 63488 00:13:01.882 }, 00:13:01.882 { 00:13:01.882 "name": "BaseBdev3", 00:13:01.882 "uuid": "e9e21ad4-722b-57de-a698-cdb01abd2866", 00:13:01.882 "is_configured": true, 00:13:01.882 "data_offset": 2048, 00:13:01.882 "data_size": 63488 00:13:01.882 } 00:13:01.882 ] 00:13:01.882 }' 00:13:01.882 06:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:01.882 06:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.452 06:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:13:02.452 06:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:02.452 [2024-08-14 06:44:29.524469] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:03.392 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read 
failure 00:13:03.392 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:13:03.392 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:03.392 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:13:03.392 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:13:03.392 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:03.653 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:03.653 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:03.653 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:03.653 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:03.653 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:03.653 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:03.653 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:03.653 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:03.653 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:03.653 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.653 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.653 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:03.653 "name": "raid_bdev1", 00:13:03.653 "uuid": "c27ae7c1-444e-4fd9-81cb-2f1eec6f9c0d", 00:13:03.653 "strip_size_kb": 0, 00:13:03.653 "state": "online", 00:13:03.653 "raid_level": "raid1", 00:13:03.653 "superblock": true, 00:13:03.653 "num_base_bdevs": 3, 00:13:03.653 "num_base_bdevs_discovered": 3, 00:13:03.653 "num_base_bdevs_operational": 3, 00:13:03.653 "base_bdevs_list": [ 00:13:03.653 { 00:13:03.653 "name": "BaseBdev1", 00:13:03.653 "uuid": "05579c64-67f3-5961-a5c6-f3b2ea60e040", 00:13:03.653 "is_configured": true, 00:13:03.653 "data_offset": 2048, 00:13:03.653 "data_size": 63488 00:13:03.653 }, 00:13:03.653 { 00:13:03.653 "name": "BaseBdev2", 00:13:03.653 "uuid": "dc0a813d-0e8e-56a9-b5a9-8b3acfa29bdd", 00:13:03.653 "is_configured": true, 00:13:03.653 "data_offset": 2048, 00:13:03.653 "data_size": 63488 00:13:03.653 }, 00:13:03.653 { 00:13:03.653 "name": "BaseBdev3", 00:13:03.653 "uuid": "e9e21ad4-722b-57de-a698-cdb01abd2866", 00:13:03.653 "is_configured": true, 00:13:03.653 "data_offset": 2048, 00:13:03.653 "data_size": 63488 00:13:03.653 } 00:13:03.653 ] 00:13:03.653 }' 00:13:03.653 06:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:03.653 06:44:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.223 06:44:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:04.483 [2024-08-14 06:44:31.600737] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
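For the read case just exercised, the injected failure on EE_BaseBdev1_malloc does not degrade the array: the test keeps its expected base bdev count at 3, and the state dump above still reports the raid1 online with all three base bdevs discovered before it is deleted. A minimal sketch of the injection and the check, reusing the names and jq filter from this trace:

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1")'
  # expected here: "state": "online", "num_base_bdevs_discovered": 3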
00:13:04.483 [2024-08-14 06:44:31.600791] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:04.483 [2024-08-14 06:44:31.603292] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.483 [2024-08-14 06:44:31.603350] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.483 [2024-08-14 06:44:31.603476] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.483 [2024-08-14 06:44:31.603488] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:13:04.483 0 00:13:04.483 06:44:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 83198 00:13:04.483 06:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 83198 ']' 00:13:04.483 06:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 83198 00:13:04.483 06:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:13:04.483 06:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:04.483 06:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83198 00:13:04.483 06:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:04.484 06:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:04.484 killing process with pid 83198 00:13:04.484 06:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83198' 00:13:04.484 06:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 83198 00:13:04.484 [2024-08-14 06:44:31.659497] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:04.484 06:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 83198 00:13:04.484 [2024-08-14 06:44:31.708325] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:05.055 06:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.XgSjrVG2Fd 00:13:05.055 06:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:13:05.055 06:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:13:05.055 06:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:13:05.055 06:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:13:05.055 06:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:05.055 06:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:05.055 06:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:05.055 00:13:05.055 real 0m6.288s 00:13:05.055 user 0m9.787s 00:13:05.055 sys 0m0.842s 00:13:05.055 06:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:05.055 06:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.055 ************************************ 00:13:05.055 END TEST raid_read_error_test 00:13:05.055 ************************************ 00:13:05.055 06:44:32 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:13:05.055 06:44:32 
bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:05.055 06:44:32 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:05.055 06:44:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:05.055 ************************************ 00:13:05.055 START TEST raid_write_error_test 00:13:05.055 ************************************ 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 3 write 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.ccv35Di687 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=83372 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 83372 /var/tmp/spdk-raid.sock 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 
-o 128k -q 1 -z -f -L bdev_raid 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 83372 ']' 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:05.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:05.055 06:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.055 [2024-08-14 06:44:32.275498] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:13:05.055 [2024-08-14 06:44:32.275695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83372 ] 00:13:05.315 [2024-08-14 06:44:32.427838] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.315 [2024-08-14 06:44:32.502912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.574 [2024-08-14 06:44:32.580339] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.574 [2024-08-14 06:44:32.580383] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.144 06:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:06.144 06:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:13:06.144 06:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:06.144 06:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:06.144 BaseBdev1_malloc 00:13:06.144 06:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:06.405 true 00:13:06.405 06:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:06.664 [2024-08-14 06:44:33.688203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:06.664 [2024-08-14 06:44:33.688324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.664 [2024-08-14 06:44:33.688354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:13:06.664 [2024-08-14 06:44:33.688371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.664 [2024-08-14 06:44:33.691134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.664 [2024-08-14 06:44:33.691193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:06.664 BaseBdev1 00:13:06.664 06:44:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:06.664 06:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:06.664 BaseBdev2_malloc 00:13:06.664 06:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:06.922 true 00:13:06.922 06:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:07.182 [2024-08-14 06:44:34.238844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:07.182 [2024-08-14 06:44:34.238964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.182 [2024-08-14 06:44:34.238994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:13:07.182 [2024-08-14 06:44:34.239008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.182 [2024-08-14 06:44:34.241695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.182 [2024-08-14 06:44:34.241739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:07.182 BaseBdev2 00:13:07.182 06:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:07.182 06:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:07.442 BaseBdev3_malloc 00:13:07.442 06:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:07.442 true 00:13:07.442 06:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:07.702 [2024-08-14 06:44:34.816507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:07.702 [2024-08-14 06:44:34.816608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.702 [2024-08-14 06:44:34.816636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:13:07.702 [2024-08-14 06:44:34.816650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.702 [2024-08-14 06:44:34.819469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.702 [2024-08-14 06:44:34.819519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:07.702 BaseBdev3 00:13:07.702 06:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:07.962 [2024-08-14 06:44:34.992360] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.962 [2024-08-14 06:44:34.994626] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:07.962 [2024-08-14 06:44:34.994713] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:07.962 [2024-08-14 06:44:34.994956] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:07.962 [2024-08-14 06:44:34.994976] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:07.962 [2024-08-14 06:44:34.995365] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:13:07.962 [2024-08-14 06:44:34.995558] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:07.962 [2024-08-14 06:44:34.995580] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:13:07.962 [2024-08-14 06:44:34.995772] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.962 06:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:07.962 06:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:07.962 06:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:07.962 06:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:07.962 06:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:07.962 06:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:07.962 06:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:07.962 06:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:07.962 06:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:07.962 06:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:07.962 06:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.962 06:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.221 06:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:08.221 "name": "raid_bdev1", 00:13:08.221 "uuid": "0749042f-a65d-4084-a5a6-ae6190475479", 00:13:08.221 "strip_size_kb": 0, 00:13:08.221 "state": "online", 00:13:08.221 "raid_level": "raid1", 00:13:08.221 "superblock": true, 00:13:08.221 "num_base_bdevs": 3, 00:13:08.221 "num_base_bdevs_discovered": 3, 00:13:08.221 "num_base_bdevs_operational": 3, 00:13:08.221 "base_bdevs_list": [ 00:13:08.221 { 00:13:08.221 "name": "BaseBdev1", 00:13:08.221 "uuid": "2f4ee317-f59a-5cc5-967c-b70c0d92f2a9", 00:13:08.221 "is_configured": true, 00:13:08.221 "data_offset": 2048, 00:13:08.221 "data_size": 63488 00:13:08.221 }, 00:13:08.221 { 00:13:08.221 "name": "BaseBdev2", 00:13:08.221 "uuid": "77077fd8-3b50-5556-8f6b-34b903e494ef", 00:13:08.221 "is_configured": true, 00:13:08.221 "data_offset": 2048, 00:13:08.221 "data_size": 63488 00:13:08.221 }, 00:13:08.221 { 00:13:08.221 "name": "BaseBdev3", 00:13:08.221 "uuid": "335484a7-f41a-5911-8ca5-2f0eeec31db2", 00:13:08.221 "is_configured": true, 00:13:08.221 "data_offset": 2048, 00:13:08.221 "data_size": 63488 00:13:08.221 } 00:13:08.221 ] 00:13:08.221 }' 00:13:08.221 06:44:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:08.221 06:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.791 06:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:13:08.791 06:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:08.791 [2024-08-14 06:44:35.823417] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:09.731 [2024-08-14 06:44:36.934908] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:09.731 [2024-08-14 06:44:36.934985] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:09.731 [2024-08-14 06:44:36.935257] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002600 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=2 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.731 06:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.990 06:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:09.990 "name": "raid_bdev1", 00:13:09.990 "uuid": "0749042f-a65d-4084-a5a6-ae6190475479", 00:13:09.990 "strip_size_kb": 0, 00:13:09.990 "state": "online", 00:13:09.990 "raid_level": "raid1", 00:13:09.990 "superblock": true, 00:13:09.990 "num_base_bdevs": 3, 00:13:09.990 "num_base_bdevs_discovered": 2, 00:13:09.990 "num_base_bdevs_operational": 2, 00:13:09.990 "base_bdevs_list": [ 00:13:09.990 { 00:13:09.990 "name": 
null, 00:13:09.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.990 "is_configured": false, 00:13:09.990 "data_offset": 2048, 00:13:09.990 "data_size": 63488 00:13:09.990 }, 00:13:09.990 { 00:13:09.990 "name": "BaseBdev2", 00:13:09.990 "uuid": "77077fd8-3b50-5556-8f6b-34b903e494ef", 00:13:09.990 "is_configured": true, 00:13:09.990 "data_offset": 2048, 00:13:09.990 "data_size": 63488 00:13:09.990 }, 00:13:09.990 { 00:13:09.990 "name": "BaseBdev3", 00:13:09.990 "uuid": "335484a7-f41a-5911-8ca5-2f0eeec31db2", 00:13:09.990 "is_configured": true, 00:13:09.990 "data_offset": 2048, 00:13:09.990 "data_size": 63488 00:13:09.990 } 00:13:09.990 ] 00:13:09.990 }' 00:13:09.990 06:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:09.990 06:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.558 06:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:10.818 [2024-08-14 06:44:37.858032] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:10.818 [2024-08-14 06:44:37.858092] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:10.818 [2024-08-14 06:44:37.860571] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:10.818 [2024-08-14 06:44:37.860636] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.818 [2024-08-14 06:44:37.860731] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.818 [2024-08-14 06:44:37.860751] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:13:10.818 0 00:13:10.818 06:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 83372 00:13:10.818 06:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 83372 ']' 00:13:10.818 06:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 83372 00:13:10.818 06:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:13:10.818 06:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:10.818 06:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83372 00:13:10.818 06:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:10.818 06:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:10.818 06:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83372' 00:13:10.818 killing process with pid 83372 00:13:10.818 06:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 83372 00:13:10.818 [2024-08-14 06:44:37.916037] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:10.818 06:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 83372 00:13:10.818 [2024-08-14 06:44:37.965475] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:11.387 06:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.ccv35Di687 00:13:11.387 06:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 
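The write variant that just finished behaves differently: when the injected write failure hits EE_BaseBdev1_malloc, the raid module fails that slot ("Failing base bdev in slot 0"), BaseBdev1 is removed, and the verification above expects an online raid1 with only two discovered base bdevs. The final check, which the xtrace is in the middle of here, filters the bdevperf log (/raidtest/tmp.ccv35Di687 in this run) for the raid_bdev1 line and reads the sixth field (the script's fail_per_s), expecting 0.00. A condensed sketch mirroring those calls:

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1")'   # "num_base_bdevs_discovered": 2
  grep -v Job /raidtest/tmp.ccv35Di687 | grep raid_bdev1 | awk '{print $6}'   # 0.00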
00:13:11.387 06:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:13:11.387 06:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:13:11.387 06:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:13:11.387 06:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:11.387 06:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:11.387 06:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:11.387 00:13:11.387 real 0m6.187s 00:13:11.387 user 0m9.422s 00:13:11.387 sys 0m0.975s 00:13:11.387 06:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:11.387 06:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.387 ************************************ 00:13:11.387 END TEST raid_write_error_test 00:13:11.387 ************************************ 00:13:11.387 06:44:38 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:13:11.387 06:44:38 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:13:11.387 06:44:38 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:13:11.387 06:44:38 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:11.387 06:44:38 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:11.387 06:44:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:11.387 ************************************ 00:13:11.387 START TEST raid_state_function_test 00:13:11.387 ************************************ 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 false 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # 
echo BaseBdev4 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=83539 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 83539' 00:13:11.387 Process raid pid: 83539 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 83539 /var/tmp/spdk-raid.sock 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 83539 ']' 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:11.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:11.387 06:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.387 [2024-08-14 06:44:38.503197] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
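raid_state_function_test, which starts here, exercises the configuring state rather than I/O: it asks for a raid0 array over four base bdevs that do not exist yet, so the RPC only registers the Existed_Raid shell and every slot stays unconfigured until the base bdevs appear. A rough sketch of that first step, with the names and jq filter this run uses:

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")'
  # expected here: "state": "configuring", "num_base_bdevs_discovered": 0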
00:13:11.387 [2024-08-14 06:44:38.503365] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.387 [2024-08-14 06:44:38.634232] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.647 [2024-08-14 06:44:38.711663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.647 [2024-08-14 06:44:38.788174] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.647 [2024-08-14 06:44:38.788323] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.217 06:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:12.217 06:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:13:12.217 06:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:12.477 [2024-08-14 06:44:39.548713] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:12.477 [2024-08-14 06:44:39.548889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:12.477 [2024-08-14 06:44:39.548924] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:12.477 [2024-08-14 06:44:39.548954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:12.477 [2024-08-14 06:44:39.548978] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:12.477 [2024-08-14 06:44:39.549014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:12.477 [2024-08-14 06:44:39.549038] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:12.477 [2024-08-14 06:44:39.549141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:12.477 06:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:12.477 06:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:12.477 06:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:12.477 06:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:12.477 06:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:12.477 06:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:12.477 06:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:12.477 06:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:12.477 06:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:12.477 06:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:12.477 06:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.477 06:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.737 06:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:12.737 "name": "Existed_Raid", 00:13:12.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.737 "strip_size_kb": 64, 00:13:12.737 "state": "configuring", 00:13:12.737 "raid_level": "raid0", 00:13:12.737 "superblock": false, 00:13:12.737 "num_base_bdevs": 4, 00:13:12.737 "num_base_bdevs_discovered": 0, 00:13:12.737 "num_base_bdevs_operational": 4, 00:13:12.737 "base_bdevs_list": [ 00:13:12.737 { 00:13:12.737 "name": "BaseBdev1", 00:13:12.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.737 "is_configured": false, 00:13:12.737 "data_offset": 0, 00:13:12.737 "data_size": 0 00:13:12.737 }, 00:13:12.737 { 00:13:12.737 "name": "BaseBdev2", 00:13:12.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.737 "is_configured": false, 00:13:12.737 "data_offset": 0, 00:13:12.737 "data_size": 0 00:13:12.737 }, 00:13:12.737 { 00:13:12.737 "name": "BaseBdev3", 00:13:12.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.737 "is_configured": false, 00:13:12.737 "data_offset": 0, 00:13:12.737 "data_size": 0 00:13:12.738 }, 00:13:12.738 { 00:13:12.738 "name": "BaseBdev4", 00:13:12.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.738 "is_configured": false, 00:13:12.738 "data_offset": 0, 00:13:12.738 "data_size": 0 00:13:12.738 } 00:13:12.738 ] 00:13:12.738 }' 00:13:12.738 06:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:12.738 06:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.344 06:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:13.344 [2024-08-14 06:44:40.443030] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:13.344 [2024-08-14 06:44:40.443216] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:13:13.344 06:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:13.619 [2024-08-14 06:44:40.642743] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:13.619 [2024-08-14 06:44:40.642924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:13.619 [2024-08-14 06:44:40.642958] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.619 [2024-08-14 06:44:40.642978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.619 [2024-08-14 06:44:40.642998] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:13.619 [2024-08-14 06:44:40.643016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:13.619 [2024-08-14 06:44:40.643037] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:13.620 [2024-08-14 06:44:40.643055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 
doesn't exist now 00:13:13.620 06:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:13.620 [2024-08-14 06:44:40.841549] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.620 BaseBdev1 00:13:13.620 06:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:13.620 06:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:13.620 06:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:13.620 06:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:13.620 06:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:13.620 06:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:13.620 06:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:13.883 06:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:14.142 [ 00:13:14.142 { 00:13:14.142 "name": "BaseBdev1", 00:13:14.142 "aliases": [ 00:13:14.142 "a0be4080-9dfe-4113-a1a8-32741389568c" 00:13:14.142 ], 00:13:14.142 "product_name": "Malloc disk", 00:13:14.142 "block_size": 512, 00:13:14.142 "num_blocks": 65536, 00:13:14.142 "uuid": "a0be4080-9dfe-4113-a1a8-32741389568c", 00:13:14.142 "assigned_rate_limits": { 00:13:14.142 "rw_ios_per_sec": 0, 00:13:14.142 "rw_mbytes_per_sec": 0, 00:13:14.142 "r_mbytes_per_sec": 0, 00:13:14.142 "w_mbytes_per_sec": 0 00:13:14.142 }, 00:13:14.142 "claimed": true, 00:13:14.142 "claim_type": "exclusive_write", 00:13:14.142 "zoned": false, 00:13:14.142 "supported_io_types": { 00:13:14.142 "read": true, 00:13:14.142 "write": true, 00:13:14.142 "unmap": true, 00:13:14.142 "flush": true, 00:13:14.142 "reset": true, 00:13:14.142 "nvme_admin": false, 00:13:14.142 "nvme_io": false, 00:13:14.142 "nvme_io_md": false, 00:13:14.142 "write_zeroes": true, 00:13:14.142 "zcopy": true, 00:13:14.142 "get_zone_info": false, 00:13:14.142 "zone_management": false, 00:13:14.142 "zone_append": false, 00:13:14.142 "compare": false, 00:13:14.142 "compare_and_write": false, 00:13:14.142 "abort": true, 00:13:14.142 "seek_hole": false, 00:13:14.142 "seek_data": false, 00:13:14.142 "copy": true, 00:13:14.142 "nvme_iov_md": false 00:13:14.142 }, 00:13:14.142 "memory_domains": [ 00:13:14.142 { 00:13:14.142 "dma_device_id": "system", 00:13:14.142 "dma_device_type": 1 00:13:14.142 }, 00:13:14.142 { 00:13:14.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.142 "dma_device_type": 2 00:13:14.142 } 00:13:14.142 ], 00:13:14.142 "driver_specific": {} 00:13:14.142 } 00:13:14.142 ] 00:13:14.142 06:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:14.142 06:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:14.142 06:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:14.142 06:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:13:14.142 06:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:14.142 06:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:14.142 06:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:14.142 06:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:14.142 06:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:14.142 06:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:14.142 06:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:14.143 06:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.143 06:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.402 06:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:14.402 "name": "Existed_Raid", 00:13:14.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.402 "strip_size_kb": 64, 00:13:14.402 "state": "configuring", 00:13:14.402 "raid_level": "raid0", 00:13:14.402 "superblock": false, 00:13:14.402 "num_base_bdevs": 4, 00:13:14.402 "num_base_bdevs_discovered": 1, 00:13:14.402 "num_base_bdevs_operational": 4, 00:13:14.402 "base_bdevs_list": [ 00:13:14.402 { 00:13:14.402 "name": "BaseBdev1", 00:13:14.402 "uuid": "a0be4080-9dfe-4113-a1a8-32741389568c", 00:13:14.402 "is_configured": true, 00:13:14.402 "data_offset": 0, 00:13:14.402 "data_size": 65536 00:13:14.402 }, 00:13:14.402 { 00:13:14.402 "name": "BaseBdev2", 00:13:14.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.402 "is_configured": false, 00:13:14.402 "data_offset": 0, 00:13:14.402 "data_size": 0 00:13:14.402 }, 00:13:14.402 { 00:13:14.402 "name": "BaseBdev3", 00:13:14.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.402 "is_configured": false, 00:13:14.402 "data_offset": 0, 00:13:14.402 "data_size": 0 00:13:14.402 }, 00:13:14.402 { 00:13:14.402 "name": "BaseBdev4", 00:13:14.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.402 "is_configured": false, 00:13:14.402 "data_offset": 0, 00:13:14.402 "data_size": 0 00:13:14.402 } 00:13:14.402 ] 00:13:14.402 }' 00:13:14.402 06:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:14.402 06:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.970 06:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:14.970 [2024-08-14 06:44:42.115567] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:14.970 [2024-08-14 06:44:42.115773] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:14.970 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:15.230 [2024-08-14 06:44:42.311341] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev1 is claimed 00:13:15.230 [2024-08-14 06:44:42.313576] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:15.230 [2024-08-14 06:44:42.313625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:15.230 [2024-08-14 06:44:42.313637] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:15.230 [2024-08-14 06:44:42.313671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:15.230 [2024-08-14 06:44:42.313680] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:15.230 [2024-08-14 06:44:42.313687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:15.230 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:15.230 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:15.230 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:15.230 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:15.230 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:15.230 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:15.230 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:15.230 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:15.230 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:15.230 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:15.230 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:15.230 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:15.230 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.230 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.490 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:15.490 "name": "Existed_Raid", 00:13:15.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.490 "strip_size_kb": 64, 00:13:15.490 "state": "configuring", 00:13:15.490 "raid_level": "raid0", 00:13:15.490 "superblock": false, 00:13:15.490 "num_base_bdevs": 4, 00:13:15.490 "num_base_bdevs_discovered": 1, 00:13:15.490 "num_base_bdevs_operational": 4, 00:13:15.490 "base_bdevs_list": [ 00:13:15.490 { 00:13:15.490 "name": "BaseBdev1", 00:13:15.490 "uuid": "a0be4080-9dfe-4113-a1a8-32741389568c", 00:13:15.490 "is_configured": true, 00:13:15.490 "data_offset": 0, 00:13:15.490 "data_size": 65536 00:13:15.490 }, 00:13:15.490 { 00:13:15.490 "name": "BaseBdev2", 00:13:15.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.490 "is_configured": false, 00:13:15.490 "data_offset": 0, 00:13:15.490 "data_size": 0 00:13:15.490 }, 00:13:15.490 { 00:13:15.490 "name": "BaseBdev3", 00:13:15.490 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:15.490 "is_configured": false, 00:13:15.490 "data_offset": 0, 00:13:15.490 "data_size": 0 00:13:15.490 }, 00:13:15.490 { 00:13:15.490 "name": "BaseBdev4", 00:13:15.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.490 "is_configured": false, 00:13:15.490 "data_offset": 0, 00:13:15.490 "data_size": 0 00:13:15.490 } 00:13:15.490 ] 00:13:15.490 }' 00:13:15.490 06:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:15.490 06:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.058 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:16.058 [2024-08-14 06:44:43.222725] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.058 BaseBdev2 00:13:16.058 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:16.058 06:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:16.058 06:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:16.058 06:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:16.058 06:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:16.058 06:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:16.058 06:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:16.318 06:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:16.578 [ 00:13:16.578 { 00:13:16.578 "name": "BaseBdev2", 00:13:16.578 "aliases": [ 00:13:16.578 "f558b300-bc96-498d-a5d1-254c6430afac" 00:13:16.578 ], 00:13:16.578 "product_name": "Malloc disk", 00:13:16.578 "block_size": 512, 00:13:16.578 "num_blocks": 65536, 00:13:16.578 "uuid": "f558b300-bc96-498d-a5d1-254c6430afac", 00:13:16.578 "assigned_rate_limits": { 00:13:16.578 "rw_ios_per_sec": 0, 00:13:16.578 "rw_mbytes_per_sec": 0, 00:13:16.578 "r_mbytes_per_sec": 0, 00:13:16.578 "w_mbytes_per_sec": 0 00:13:16.578 }, 00:13:16.578 "claimed": true, 00:13:16.578 "claim_type": "exclusive_write", 00:13:16.578 "zoned": false, 00:13:16.578 "supported_io_types": { 00:13:16.578 "read": true, 00:13:16.578 "write": true, 00:13:16.578 "unmap": true, 00:13:16.578 "flush": true, 00:13:16.578 "reset": true, 00:13:16.578 "nvme_admin": false, 00:13:16.578 "nvme_io": false, 00:13:16.578 "nvme_io_md": false, 00:13:16.578 "write_zeroes": true, 00:13:16.578 "zcopy": true, 00:13:16.578 "get_zone_info": false, 00:13:16.578 "zone_management": false, 00:13:16.578 "zone_append": false, 00:13:16.578 "compare": false, 00:13:16.578 "compare_and_write": false, 00:13:16.578 "abort": true, 00:13:16.578 "seek_hole": false, 00:13:16.578 "seek_data": false, 00:13:16.578 "copy": true, 00:13:16.578 "nvme_iov_md": false 00:13:16.578 }, 00:13:16.578 "memory_domains": [ 00:13:16.578 { 00:13:16.578 "dma_device_id": "system", 00:13:16.578 "dma_device_type": 1 00:13:16.578 }, 00:13:16.578 { 00:13:16.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.578 
"dma_device_type": 2 00:13:16.578 } 00:13:16.578 ], 00:13:16.578 "driver_specific": {} 00:13:16.578 } 00:13:16.578 ] 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:16.578 "name": "Existed_Raid", 00:13:16.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.578 "strip_size_kb": 64, 00:13:16.578 "state": "configuring", 00:13:16.578 "raid_level": "raid0", 00:13:16.578 "superblock": false, 00:13:16.578 "num_base_bdevs": 4, 00:13:16.578 "num_base_bdevs_discovered": 2, 00:13:16.578 "num_base_bdevs_operational": 4, 00:13:16.578 "base_bdevs_list": [ 00:13:16.578 { 00:13:16.578 "name": "BaseBdev1", 00:13:16.578 "uuid": "a0be4080-9dfe-4113-a1a8-32741389568c", 00:13:16.578 "is_configured": true, 00:13:16.578 "data_offset": 0, 00:13:16.578 "data_size": 65536 00:13:16.578 }, 00:13:16.578 { 00:13:16.578 "name": "BaseBdev2", 00:13:16.578 "uuid": "f558b300-bc96-498d-a5d1-254c6430afac", 00:13:16.578 "is_configured": true, 00:13:16.578 "data_offset": 0, 00:13:16.578 "data_size": 65536 00:13:16.578 }, 00:13:16.578 { 00:13:16.578 "name": "BaseBdev3", 00:13:16.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.578 "is_configured": false, 00:13:16.578 "data_offset": 0, 00:13:16.578 "data_size": 0 00:13:16.578 }, 00:13:16.578 { 00:13:16.578 "name": "BaseBdev4", 00:13:16.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.578 "is_configured": false, 00:13:16.578 "data_offset": 0, 00:13:16.578 "data_size": 0 00:13:16.578 } 00:13:16.578 ] 00:13:16.578 }' 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:16.578 06:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.148 06:44:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:17.408 [2024-08-14 06:44:44.453790] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.408 BaseBdev3 00:13:17.408 06:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:17.408 06:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:13:17.408 06:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:17.408 06:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:17.408 06:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:17.408 06:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:17.408 06:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:17.668 06:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:17.668 [ 00:13:17.668 { 00:13:17.668 "name": "BaseBdev3", 00:13:17.668 "aliases": [ 00:13:17.668 "393ce69c-bef0-418d-8ba8-253d21bdf9e8" 00:13:17.668 ], 00:13:17.668 "product_name": "Malloc disk", 00:13:17.668 "block_size": 512, 00:13:17.668 "num_blocks": 65536, 00:13:17.668 "uuid": "393ce69c-bef0-418d-8ba8-253d21bdf9e8", 00:13:17.668 "assigned_rate_limits": { 00:13:17.668 "rw_ios_per_sec": 0, 00:13:17.668 "rw_mbytes_per_sec": 0, 00:13:17.668 "r_mbytes_per_sec": 0, 00:13:17.668 "w_mbytes_per_sec": 0 00:13:17.668 }, 00:13:17.668 "claimed": true, 00:13:17.668 "claim_type": "exclusive_write", 00:13:17.668 "zoned": false, 00:13:17.668 "supported_io_types": { 00:13:17.668 "read": true, 00:13:17.668 "write": true, 00:13:17.668 "unmap": true, 00:13:17.668 "flush": true, 00:13:17.668 "reset": true, 00:13:17.668 "nvme_admin": false, 00:13:17.668 "nvme_io": false, 00:13:17.668 "nvme_io_md": false, 00:13:17.668 "write_zeroes": true, 00:13:17.668 "zcopy": true, 00:13:17.668 "get_zone_info": false, 00:13:17.668 "zone_management": false, 00:13:17.668 "zone_append": false, 00:13:17.668 "compare": false, 00:13:17.668 "compare_and_write": false, 00:13:17.668 "abort": true, 00:13:17.668 "seek_hole": false, 00:13:17.668 "seek_data": false, 00:13:17.668 "copy": true, 00:13:17.668 "nvme_iov_md": false 00:13:17.668 }, 00:13:17.668 "memory_domains": [ 00:13:17.668 { 00:13:17.668 "dma_device_id": "system", 00:13:17.668 "dma_device_type": 1 00:13:17.668 }, 00:13:17.668 { 00:13:17.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.668 "dma_device_type": 2 00:13:17.668 } 00:13:17.668 ], 00:13:17.668 "driver_specific": {} 00:13:17.668 } 00:13:17.668 ] 00:13:17.668 06:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:17.668 06:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:17.668 06:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:17.668 06:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:17.668 06:44:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:17.668 06:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:17.668 06:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:17.668 06:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:17.668 06:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:17.668 06:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:17.668 06:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:17.668 06:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:17.668 06:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:17.668 06:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.669 06:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.928 06:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:17.928 "name": "Existed_Raid", 00:13:17.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.928 "strip_size_kb": 64, 00:13:17.928 "state": "configuring", 00:13:17.928 "raid_level": "raid0", 00:13:17.928 "superblock": false, 00:13:17.928 "num_base_bdevs": 4, 00:13:17.928 "num_base_bdevs_discovered": 3, 00:13:17.928 "num_base_bdevs_operational": 4, 00:13:17.928 "base_bdevs_list": [ 00:13:17.928 { 00:13:17.928 "name": "BaseBdev1", 00:13:17.928 "uuid": "a0be4080-9dfe-4113-a1a8-32741389568c", 00:13:17.928 "is_configured": true, 00:13:17.928 "data_offset": 0, 00:13:17.928 "data_size": 65536 00:13:17.928 }, 00:13:17.928 { 00:13:17.928 "name": "BaseBdev2", 00:13:17.928 "uuid": "f558b300-bc96-498d-a5d1-254c6430afac", 00:13:17.928 "is_configured": true, 00:13:17.928 "data_offset": 0, 00:13:17.928 "data_size": 65536 00:13:17.928 }, 00:13:17.928 { 00:13:17.928 "name": "BaseBdev3", 00:13:17.928 "uuid": "393ce69c-bef0-418d-8ba8-253d21bdf9e8", 00:13:17.928 "is_configured": true, 00:13:17.928 "data_offset": 0, 00:13:17.928 "data_size": 65536 00:13:17.928 }, 00:13:17.928 { 00:13:17.928 "name": "BaseBdev4", 00:13:17.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.928 "is_configured": false, 00:13:17.928 "data_offset": 0, 00:13:17.928 "data_size": 0 00:13:17.928 } 00:13:17.928 ] 00:13:17.928 }' 00:13:17.928 06:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:17.928 06:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.497 06:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:18.497 [2024-08-14 06:44:45.732907] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:18.497 [2024-08-14 06:44:45.733075] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:18.497 [2024-08-14 06:44:45.733110] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:18.497 [2024-08-14 06:44:45.733593] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:18.497 [2024-08-14 06:44:45.733839] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:18.497 [2024-08-14 06:44:45.733899] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:18.497 [2024-08-14 06:44:45.734226] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.497 BaseBdev4 00:13:18.497 06:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:13:18.497 06:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:13:18.497 06:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:18.757 06:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:18.757 06:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:18.757 06:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:18.757 06:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:18.757 06:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:19.015 [ 00:13:19.015 { 00:13:19.015 "name": "BaseBdev4", 00:13:19.015 "aliases": [ 00:13:19.015 "df32f23f-0545-48a8-b0b3-2c9479e9f0ae" 00:13:19.015 ], 00:13:19.015 "product_name": "Malloc disk", 00:13:19.015 "block_size": 512, 00:13:19.015 "num_blocks": 65536, 00:13:19.015 "uuid": "df32f23f-0545-48a8-b0b3-2c9479e9f0ae", 00:13:19.015 "assigned_rate_limits": { 00:13:19.015 "rw_ios_per_sec": 0, 00:13:19.015 "rw_mbytes_per_sec": 0, 00:13:19.015 "r_mbytes_per_sec": 0, 00:13:19.015 "w_mbytes_per_sec": 0 00:13:19.015 }, 00:13:19.015 "claimed": true, 00:13:19.015 "claim_type": "exclusive_write", 00:13:19.015 "zoned": false, 00:13:19.015 "supported_io_types": { 00:13:19.015 "read": true, 00:13:19.015 "write": true, 00:13:19.015 "unmap": true, 00:13:19.015 "flush": true, 00:13:19.015 "reset": true, 00:13:19.015 "nvme_admin": false, 00:13:19.015 "nvme_io": false, 00:13:19.015 "nvme_io_md": false, 00:13:19.015 "write_zeroes": true, 00:13:19.015 "zcopy": true, 00:13:19.015 "get_zone_info": false, 00:13:19.015 "zone_management": false, 00:13:19.015 "zone_append": false, 00:13:19.015 "compare": false, 00:13:19.015 "compare_and_write": false, 00:13:19.015 "abort": true, 00:13:19.015 "seek_hole": false, 00:13:19.015 "seek_data": false, 00:13:19.015 "copy": true, 00:13:19.015 "nvme_iov_md": false 00:13:19.015 }, 00:13:19.015 "memory_domains": [ 00:13:19.015 { 00:13:19.015 "dma_device_id": "system", 00:13:19.015 "dma_device_type": 1 00:13:19.015 }, 00:13:19.015 { 00:13:19.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.015 "dma_device_type": 2 00:13:19.015 } 00:13:19.015 ], 00:13:19.015 "driver_specific": {} 00:13:19.015 } 00:13:19.015 ] 00:13:19.015 06:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:19.015 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:19.015 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:19.015 
06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:19.015 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:19.015 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:19.015 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:19.015 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:19.015 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:19.015 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:19.015 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:19.015 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:19.015 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:19.015 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.015 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.274 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:19.274 "name": "Existed_Raid", 00:13:19.274 "uuid": "bed1489e-71f2-49dd-aff4-62f05d75c494", 00:13:19.274 "strip_size_kb": 64, 00:13:19.274 "state": "online", 00:13:19.274 "raid_level": "raid0", 00:13:19.274 "superblock": false, 00:13:19.274 "num_base_bdevs": 4, 00:13:19.274 "num_base_bdevs_discovered": 4, 00:13:19.274 "num_base_bdevs_operational": 4, 00:13:19.274 "base_bdevs_list": [ 00:13:19.274 { 00:13:19.274 "name": "BaseBdev1", 00:13:19.274 "uuid": "a0be4080-9dfe-4113-a1a8-32741389568c", 00:13:19.274 "is_configured": true, 00:13:19.274 "data_offset": 0, 00:13:19.274 "data_size": 65536 00:13:19.274 }, 00:13:19.274 { 00:13:19.274 "name": "BaseBdev2", 00:13:19.275 "uuid": "f558b300-bc96-498d-a5d1-254c6430afac", 00:13:19.275 "is_configured": true, 00:13:19.275 "data_offset": 0, 00:13:19.275 "data_size": 65536 00:13:19.275 }, 00:13:19.275 { 00:13:19.275 "name": "BaseBdev3", 00:13:19.275 "uuid": "393ce69c-bef0-418d-8ba8-253d21bdf9e8", 00:13:19.275 "is_configured": true, 00:13:19.275 "data_offset": 0, 00:13:19.275 "data_size": 65536 00:13:19.275 }, 00:13:19.275 { 00:13:19.275 "name": "BaseBdev4", 00:13:19.275 "uuid": "df32f23f-0545-48a8-b0b3-2c9479e9f0ae", 00:13:19.275 "is_configured": true, 00:13:19.275 "data_offset": 0, 00:13:19.275 "data_size": 65536 00:13:19.275 } 00:13:19.275 ] 00:13:19.275 }' 00:13:19.275 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:19.275 06:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.843 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:19.843 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:19.843 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:19.843 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 
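With all four base bdevs created and claimed, the raid bdev transitions to "online" and verify_raid_bdev_properties dumps the assembled volume. Condensed, the property check recorded below amounts to the following (same socket as above; the jq filter for configured base bdevs is the one shown in the log, applied here directly to the RPC output):

    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid | jq '.[]'
    # names of the base bdevs that report is_configured == true
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid \
        | jq -r '.[0].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'

which for this run yields BaseBdev1 through BaseBdev4.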
00:13:19.843 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:19.843 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:19.843 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:19.843 06:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:19.843 [2024-08-14 06:44:47.083053] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.103 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:20.103 "name": "Existed_Raid", 00:13:20.103 "aliases": [ 00:13:20.103 "bed1489e-71f2-49dd-aff4-62f05d75c494" 00:13:20.103 ], 00:13:20.103 "product_name": "Raid Volume", 00:13:20.103 "block_size": 512, 00:13:20.103 "num_blocks": 262144, 00:13:20.103 "uuid": "bed1489e-71f2-49dd-aff4-62f05d75c494", 00:13:20.103 "assigned_rate_limits": { 00:13:20.103 "rw_ios_per_sec": 0, 00:13:20.103 "rw_mbytes_per_sec": 0, 00:13:20.103 "r_mbytes_per_sec": 0, 00:13:20.103 "w_mbytes_per_sec": 0 00:13:20.103 }, 00:13:20.103 "claimed": false, 00:13:20.103 "zoned": false, 00:13:20.103 "supported_io_types": { 00:13:20.103 "read": true, 00:13:20.103 "write": true, 00:13:20.103 "unmap": true, 00:13:20.103 "flush": true, 00:13:20.103 "reset": true, 00:13:20.103 "nvme_admin": false, 00:13:20.103 "nvme_io": false, 00:13:20.103 "nvme_io_md": false, 00:13:20.103 "write_zeroes": true, 00:13:20.103 "zcopy": false, 00:13:20.103 "get_zone_info": false, 00:13:20.103 "zone_management": false, 00:13:20.103 "zone_append": false, 00:13:20.103 "compare": false, 00:13:20.103 "compare_and_write": false, 00:13:20.103 "abort": false, 00:13:20.103 "seek_hole": false, 00:13:20.103 "seek_data": false, 00:13:20.103 "copy": false, 00:13:20.103 "nvme_iov_md": false 00:13:20.103 }, 00:13:20.103 "memory_domains": [ 00:13:20.103 { 00:13:20.103 "dma_device_id": "system", 00:13:20.103 "dma_device_type": 1 00:13:20.103 }, 00:13:20.103 { 00:13:20.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.103 "dma_device_type": 2 00:13:20.103 }, 00:13:20.103 { 00:13:20.103 "dma_device_id": "system", 00:13:20.103 "dma_device_type": 1 00:13:20.103 }, 00:13:20.103 { 00:13:20.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.103 "dma_device_type": 2 00:13:20.103 }, 00:13:20.103 { 00:13:20.103 "dma_device_id": "system", 00:13:20.103 "dma_device_type": 1 00:13:20.103 }, 00:13:20.103 { 00:13:20.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.103 "dma_device_type": 2 00:13:20.103 }, 00:13:20.103 { 00:13:20.103 "dma_device_id": "system", 00:13:20.103 "dma_device_type": 1 00:13:20.103 }, 00:13:20.103 { 00:13:20.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.103 "dma_device_type": 2 00:13:20.103 } 00:13:20.103 ], 00:13:20.103 "driver_specific": { 00:13:20.103 "raid": { 00:13:20.103 "uuid": "bed1489e-71f2-49dd-aff4-62f05d75c494", 00:13:20.103 "strip_size_kb": 64, 00:13:20.103 "state": "online", 00:13:20.103 "raid_level": "raid0", 00:13:20.103 "superblock": false, 00:13:20.103 "num_base_bdevs": 4, 00:13:20.103 "num_base_bdevs_discovered": 4, 00:13:20.103 "num_base_bdevs_operational": 4, 00:13:20.103 "base_bdevs_list": [ 00:13:20.103 { 00:13:20.103 "name": "BaseBdev1", 00:13:20.103 "uuid": "a0be4080-9dfe-4113-a1a8-32741389568c", 00:13:20.103 "is_configured": true, 00:13:20.103 "data_offset": 0, 00:13:20.103 "data_size": 65536 
00:13:20.103 }, 00:13:20.103 { 00:13:20.103 "name": "BaseBdev2", 00:13:20.103 "uuid": "f558b300-bc96-498d-a5d1-254c6430afac", 00:13:20.103 "is_configured": true, 00:13:20.103 "data_offset": 0, 00:13:20.103 "data_size": 65536 00:13:20.103 }, 00:13:20.103 { 00:13:20.103 "name": "BaseBdev3", 00:13:20.103 "uuid": "393ce69c-bef0-418d-8ba8-253d21bdf9e8", 00:13:20.104 "is_configured": true, 00:13:20.104 "data_offset": 0, 00:13:20.104 "data_size": 65536 00:13:20.104 }, 00:13:20.104 { 00:13:20.104 "name": "BaseBdev4", 00:13:20.104 "uuid": "df32f23f-0545-48a8-b0b3-2c9479e9f0ae", 00:13:20.104 "is_configured": true, 00:13:20.104 "data_offset": 0, 00:13:20.104 "data_size": 65536 00:13:20.104 } 00:13:20.104 ] 00:13:20.104 } 00:13:20.104 } 00:13:20.104 }' 00:13:20.104 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:20.104 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:20.104 BaseBdev2 00:13:20.104 BaseBdev3 00:13:20.104 BaseBdev4' 00:13:20.104 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:20.104 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:20.104 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:20.104 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:20.104 "name": "BaseBdev1", 00:13:20.104 "aliases": [ 00:13:20.104 "a0be4080-9dfe-4113-a1a8-32741389568c" 00:13:20.104 ], 00:13:20.104 "product_name": "Malloc disk", 00:13:20.104 "block_size": 512, 00:13:20.104 "num_blocks": 65536, 00:13:20.104 "uuid": "a0be4080-9dfe-4113-a1a8-32741389568c", 00:13:20.104 "assigned_rate_limits": { 00:13:20.104 "rw_ios_per_sec": 0, 00:13:20.104 "rw_mbytes_per_sec": 0, 00:13:20.104 "r_mbytes_per_sec": 0, 00:13:20.104 "w_mbytes_per_sec": 0 00:13:20.104 }, 00:13:20.104 "claimed": true, 00:13:20.104 "claim_type": "exclusive_write", 00:13:20.104 "zoned": false, 00:13:20.104 "supported_io_types": { 00:13:20.104 "read": true, 00:13:20.104 "write": true, 00:13:20.104 "unmap": true, 00:13:20.104 "flush": true, 00:13:20.104 "reset": true, 00:13:20.104 "nvme_admin": false, 00:13:20.104 "nvme_io": false, 00:13:20.104 "nvme_io_md": false, 00:13:20.104 "write_zeroes": true, 00:13:20.104 "zcopy": true, 00:13:20.104 "get_zone_info": false, 00:13:20.104 "zone_management": false, 00:13:20.104 "zone_append": false, 00:13:20.104 "compare": false, 00:13:20.104 "compare_and_write": false, 00:13:20.104 "abort": true, 00:13:20.104 "seek_hole": false, 00:13:20.104 "seek_data": false, 00:13:20.104 "copy": true, 00:13:20.104 "nvme_iov_md": false 00:13:20.104 }, 00:13:20.104 "memory_domains": [ 00:13:20.104 { 00:13:20.104 "dma_device_id": "system", 00:13:20.104 "dma_device_type": 1 00:13:20.104 }, 00:13:20.104 { 00:13:20.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.104 "dma_device_type": 2 00:13:20.104 } 00:13:20.104 ], 00:13:20.104 "driver_specific": {} 00:13:20.104 }' 00:13:20.104 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.364 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.364 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:20.364 
06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.364 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.364 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:20.364 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.364 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.364 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:20.364 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:20.622 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:20.622 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:20.622 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:20.622 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:20.622 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:20.622 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:20.622 "name": "BaseBdev2", 00:13:20.622 "aliases": [ 00:13:20.622 "f558b300-bc96-498d-a5d1-254c6430afac" 00:13:20.622 ], 00:13:20.622 "product_name": "Malloc disk", 00:13:20.622 "block_size": 512, 00:13:20.622 "num_blocks": 65536, 00:13:20.622 "uuid": "f558b300-bc96-498d-a5d1-254c6430afac", 00:13:20.622 "assigned_rate_limits": { 00:13:20.622 "rw_ios_per_sec": 0, 00:13:20.622 "rw_mbytes_per_sec": 0, 00:13:20.622 "r_mbytes_per_sec": 0, 00:13:20.622 "w_mbytes_per_sec": 0 00:13:20.622 }, 00:13:20.622 "claimed": true, 00:13:20.622 "claim_type": "exclusive_write", 00:13:20.622 "zoned": false, 00:13:20.622 "supported_io_types": { 00:13:20.622 "read": true, 00:13:20.622 "write": true, 00:13:20.622 "unmap": true, 00:13:20.622 "flush": true, 00:13:20.622 "reset": true, 00:13:20.622 "nvme_admin": false, 00:13:20.622 "nvme_io": false, 00:13:20.622 "nvme_io_md": false, 00:13:20.622 "write_zeroes": true, 00:13:20.622 "zcopy": true, 00:13:20.622 "get_zone_info": false, 00:13:20.622 "zone_management": false, 00:13:20.622 "zone_append": false, 00:13:20.623 "compare": false, 00:13:20.623 "compare_and_write": false, 00:13:20.623 "abort": true, 00:13:20.623 "seek_hole": false, 00:13:20.623 "seek_data": false, 00:13:20.623 "copy": true, 00:13:20.623 "nvme_iov_md": false 00:13:20.623 }, 00:13:20.623 "memory_domains": [ 00:13:20.623 { 00:13:20.623 "dma_device_id": "system", 00:13:20.623 "dma_device_type": 1 00:13:20.623 }, 00:13:20.623 { 00:13:20.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.623 "dma_device_type": 2 00:13:20.623 } 00:13:20.623 ], 00:13:20.623 "driver_specific": {} 00:13:20.623 }' 00:13:20.623 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.882 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.882 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:20.882 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.882 06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.882 
06:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:20.882 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.882 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.882 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:20.882 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:21.142 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:21.142 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:21.142 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:21.142 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:21.142 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:21.142 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:21.142 "name": "BaseBdev3", 00:13:21.142 "aliases": [ 00:13:21.142 "393ce69c-bef0-418d-8ba8-253d21bdf9e8" 00:13:21.142 ], 00:13:21.142 "product_name": "Malloc disk", 00:13:21.142 "block_size": 512, 00:13:21.142 "num_blocks": 65536, 00:13:21.142 "uuid": "393ce69c-bef0-418d-8ba8-253d21bdf9e8", 00:13:21.142 "assigned_rate_limits": { 00:13:21.142 "rw_ios_per_sec": 0, 00:13:21.142 "rw_mbytes_per_sec": 0, 00:13:21.142 "r_mbytes_per_sec": 0, 00:13:21.142 "w_mbytes_per_sec": 0 00:13:21.142 }, 00:13:21.142 "claimed": true, 00:13:21.142 "claim_type": "exclusive_write", 00:13:21.142 "zoned": false, 00:13:21.142 "supported_io_types": { 00:13:21.142 "read": true, 00:13:21.142 "write": true, 00:13:21.142 "unmap": true, 00:13:21.142 "flush": true, 00:13:21.142 "reset": true, 00:13:21.142 "nvme_admin": false, 00:13:21.142 "nvme_io": false, 00:13:21.142 "nvme_io_md": false, 00:13:21.142 "write_zeroes": true, 00:13:21.142 "zcopy": true, 00:13:21.142 "get_zone_info": false, 00:13:21.142 "zone_management": false, 00:13:21.142 "zone_append": false, 00:13:21.142 "compare": false, 00:13:21.142 "compare_and_write": false, 00:13:21.142 "abort": true, 00:13:21.142 "seek_hole": false, 00:13:21.142 "seek_data": false, 00:13:21.142 "copy": true, 00:13:21.142 "nvme_iov_md": false 00:13:21.142 }, 00:13:21.142 "memory_domains": [ 00:13:21.142 { 00:13:21.142 "dma_device_id": "system", 00:13:21.142 "dma_device_type": 1 00:13:21.142 }, 00:13:21.142 { 00:13:21.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.142 "dma_device_type": 2 00:13:21.142 } 00:13:21.142 ], 00:13:21.142 "driver_specific": {} 00:13:21.142 }' 00:13:21.142 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.402 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.402 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:21.402 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:21.402 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:21.402 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:21.402 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
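Each configured base bdev is fetched individually in turn and its block_size, md_size, md_interleave, and dif_type are checked against the raid volume's values, as seen for BaseBdev1 and BaseBdev2 above. One iteration of that loop, condensed into a sketch (the $name variable stands in for BaseBdev1..BaseBdev4, as in the script's own loop):

    info=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$name" | jq '.[]')
    jq .block_size    <<< "$info"   # 512, must match the raid volume
    jq .md_size       <<< "$info"   # null
    jq .md_interleave <<< "$info"   # null
    jq .dif_type      <<< "$info"   # null

The remaining records repeat these checks for BaseBdev3 and BaseBdev4 before BaseBdev1 is deleted to exercise the raid0 online-to-offline transition.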
00:13:21.402 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:21.402 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:21.402 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:21.662 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:21.662 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:21.662 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:21.662 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:21.662 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:21.662 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:21.662 "name": "BaseBdev4", 00:13:21.662 "aliases": [ 00:13:21.662 "df32f23f-0545-48a8-b0b3-2c9479e9f0ae" 00:13:21.662 ], 00:13:21.662 "product_name": "Malloc disk", 00:13:21.662 "block_size": 512, 00:13:21.662 "num_blocks": 65536, 00:13:21.662 "uuid": "df32f23f-0545-48a8-b0b3-2c9479e9f0ae", 00:13:21.662 "assigned_rate_limits": { 00:13:21.662 "rw_ios_per_sec": 0, 00:13:21.662 "rw_mbytes_per_sec": 0, 00:13:21.662 "r_mbytes_per_sec": 0, 00:13:21.662 "w_mbytes_per_sec": 0 00:13:21.662 }, 00:13:21.662 "claimed": true, 00:13:21.662 "claim_type": "exclusive_write", 00:13:21.662 "zoned": false, 00:13:21.662 "supported_io_types": { 00:13:21.662 "read": true, 00:13:21.662 "write": true, 00:13:21.662 "unmap": true, 00:13:21.662 "flush": true, 00:13:21.662 "reset": true, 00:13:21.662 "nvme_admin": false, 00:13:21.662 "nvme_io": false, 00:13:21.662 "nvme_io_md": false, 00:13:21.662 "write_zeroes": true, 00:13:21.662 "zcopy": true, 00:13:21.662 "get_zone_info": false, 00:13:21.662 "zone_management": false, 00:13:21.662 "zone_append": false, 00:13:21.662 "compare": false, 00:13:21.662 "compare_and_write": false, 00:13:21.662 "abort": true, 00:13:21.662 "seek_hole": false, 00:13:21.662 "seek_data": false, 00:13:21.662 "copy": true, 00:13:21.662 "nvme_iov_md": false 00:13:21.662 }, 00:13:21.662 "memory_domains": [ 00:13:21.662 { 00:13:21.662 "dma_device_id": "system", 00:13:21.662 "dma_device_type": 1 00:13:21.662 }, 00:13:21.662 { 00:13:21.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.662 "dma_device_type": 2 00:13:21.662 } 00:13:21.662 ], 00:13:21.662 "driver_specific": {} 00:13:21.662 }' 00:13:21.662 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.922 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.922 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:21.922 06:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:21.922 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:21.922 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:21.922 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:21.922 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:22.182 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null 
== null ]] 00:13:22.182 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:22.182 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:22.182 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:22.182 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:22.442 [2024-08-14 06:44:49.438947] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:22.442 [2024-08-14 06:44:49.439122] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.442 [2024-08-14 06:44:49.439272] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:22.442 "name": "Existed_Raid", 00:13:22.442 "uuid": "bed1489e-71f2-49dd-aff4-62f05d75c494", 00:13:22.442 "strip_size_kb": 64, 00:13:22.442 "state": "offline", 00:13:22.442 "raid_level": "raid0", 00:13:22.442 "superblock": false, 00:13:22.442 "num_base_bdevs": 4, 00:13:22.442 "num_base_bdevs_discovered": 3, 00:13:22.442 "num_base_bdevs_operational": 3, 00:13:22.442 "base_bdevs_list": [ 00:13:22.442 { 00:13:22.442 "name": null, 00:13:22.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.442 "is_configured": false, 00:13:22.442 "data_offset": 0, 00:13:22.442 "data_size": 65536 
00:13:22.442 }, 00:13:22.442 { 00:13:22.442 "name": "BaseBdev2", 00:13:22.442 "uuid": "f558b300-bc96-498d-a5d1-254c6430afac", 00:13:22.442 "is_configured": true, 00:13:22.442 "data_offset": 0, 00:13:22.442 "data_size": 65536 00:13:22.442 }, 00:13:22.442 { 00:13:22.442 "name": "BaseBdev3", 00:13:22.442 "uuid": "393ce69c-bef0-418d-8ba8-253d21bdf9e8", 00:13:22.442 "is_configured": true, 00:13:22.442 "data_offset": 0, 00:13:22.442 "data_size": 65536 00:13:22.442 }, 00:13:22.442 { 00:13:22.442 "name": "BaseBdev4", 00:13:22.442 "uuid": "df32f23f-0545-48a8-b0b3-2c9479e9f0ae", 00:13:22.442 "is_configured": true, 00:13:22.442 "data_offset": 0, 00:13:22.442 "data_size": 65536 00:13:22.442 } 00:13:22.442 ] 00:13:22.442 }' 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:22.442 06:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.011 06:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:23.011 06:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:23.011 06:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:23.011 06:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.270 06:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:23.270 06:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:23.270 06:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:23.529 [2024-08-14 06:44:50.546314] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:23.529 06:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:23.529 06:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:23.529 06:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.529 06:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:23.788 06:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:23.788 06:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:23.788 06:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:23.788 [2024-08-14 06:44:50.966321] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:23.788 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:23.788 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:23.788 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.788 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:24.047 06:44:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:24.047 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:24.047 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:24.306 [2024-08-14 06:44:51.378192] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:24.306 [2024-08-14 06:44:51.378390] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:24.306 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:24.306 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:24.306 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:24.306 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.566 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:24.566 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:24.566 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:13:24.566 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:24.566 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:24.566 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:24.825 BaseBdev2 00:13:24.825 06:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:24.825 06:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:24.825 06:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:24.825 06:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:24.825 06:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:24.825 06:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:24.825 06:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:24.825 06:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:25.084 [ 00:13:25.084 { 00:13:25.084 "name": "BaseBdev2", 00:13:25.084 "aliases": [ 00:13:25.084 "15b03dff-9321-49a3-8e9c-1f375349e561" 00:13:25.084 ], 00:13:25.084 "product_name": "Malloc disk", 00:13:25.084 "block_size": 512, 00:13:25.084 "num_blocks": 65536, 00:13:25.084 "uuid": "15b03dff-9321-49a3-8e9c-1f375349e561", 00:13:25.084 "assigned_rate_limits": { 00:13:25.084 "rw_ios_per_sec": 0, 00:13:25.084 "rw_mbytes_per_sec": 0, 00:13:25.084 "r_mbytes_per_sec": 0, 00:13:25.084 "w_mbytes_per_sec": 0 00:13:25.084 }, 00:13:25.084 "claimed": false, 00:13:25.084 "zoned": false, 00:13:25.084 "supported_io_types": { 
00:13:25.084 "read": true, 00:13:25.084 "write": true, 00:13:25.084 "unmap": true, 00:13:25.084 "flush": true, 00:13:25.084 "reset": true, 00:13:25.084 "nvme_admin": false, 00:13:25.084 "nvme_io": false, 00:13:25.084 "nvme_io_md": false, 00:13:25.084 "write_zeroes": true, 00:13:25.084 "zcopy": true, 00:13:25.084 "get_zone_info": false, 00:13:25.084 "zone_management": false, 00:13:25.084 "zone_append": false, 00:13:25.084 "compare": false, 00:13:25.084 "compare_and_write": false, 00:13:25.084 "abort": true, 00:13:25.084 "seek_hole": false, 00:13:25.084 "seek_data": false, 00:13:25.084 "copy": true, 00:13:25.084 "nvme_iov_md": false 00:13:25.084 }, 00:13:25.084 "memory_domains": [ 00:13:25.084 { 00:13:25.084 "dma_device_id": "system", 00:13:25.084 "dma_device_type": 1 00:13:25.084 }, 00:13:25.084 { 00:13:25.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.084 "dma_device_type": 2 00:13:25.084 } 00:13:25.084 ], 00:13:25.084 "driver_specific": {} 00:13:25.084 } 00:13:25.084 ] 00:13:25.084 06:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:25.084 06:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:25.084 06:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:25.084 06:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:25.343 BaseBdev3 00:13:25.343 06:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:25.343 06:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:13:25.343 06:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:25.343 06:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:25.343 06:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:25.343 06:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:25.343 06:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:25.602 06:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:25.602 [ 00:13:25.602 { 00:13:25.602 "name": "BaseBdev3", 00:13:25.602 "aliases": [ 00:13:25.602 "226d478d-8d93-4579-bbcf-319a3cdb6aaa" 00:13:25.602 ], 00:13:25.602 "product_name": "Malloc disk", 00:13:25.602 "block_size": 512, 00:13:25.602 "num_blocks": 65536, 00:13:25.602 "uuid": "226d478d-8d93-4579-bbcf-319a3cdb6aaa", 00:13:25.602 "assigned_rate_limits": { 00:13:25.602 "rw_ios_per_sec": 0, 00:13:25.602 "rw_mbytes_per_sec": 0, 00:13:25.602 "r_mbytes_per_sec": 0, 00:13:25.602 "w_mbytes_per_sec": 0 00:13:25.602 }, 00:13:25.602 "claimed": false, 00:13:25.602 "zoned": false, 00:13:25.602 "supported_io_types": { 00:13:25.602 "read": true, 00:13:25.602 "write": true, 00:13:25.602 "unmap": true, 00:13:25.602 "flush": true, 00:13:25.602 "reset": true, 00:13:25.602 "nvme_admin": false, 00:13:25.602 "nvme_io": false, 00:13:25.602 "nvme_io_md": false, 00:13:25.602 "write_zeroes": true, 00:13:25.602 "zcopy": true, 00:13:25.602 "get_zone_info": false, 
00:13:25.602 "zone_management": false, 00:13:25.602 "zone_append": false, 00:13:25.602 "compare": false, 00:13:25.602 "compare_and_write": false, 00:13:25.602 "abort": true, 00:13:25.602 "seek_hole": false, 00:13:25.602 "seek_data": false, 00:13:25.602 "copy": true, 00:13:25.602 "nvme_iov_md": false 00:13:25.602 }, 00:13:25.602 "memory_domains": [ 00:13:25.602 { 00:13:25.602 "dma_device_id": "system", 00:13:25.602 "dma_device_type": 1 00:13:25.602 }, 00:13:25.602 { 00:13:25.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.602 "dma_device_type": 2 00:13:25.602 } 00:13:25.602 ], 00:13:25.602 "driver_specific": {} 00:13:25.602 } 00:13:25.602 ] 00:13:25.602 06:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:25.602 06:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:25.602 06:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:25.602 06:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:25.861 BaseBdev4 00:13:25.861 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:13:25.861 06:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:13:25.861 06:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:25.861 06:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:25.861 06:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:25.861 06:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:25.861 06:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:26.120 06:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:26.120 [ 00:13:26.120 { 00:13:26.120 "name": "BaseBdev4", 00:13:26.120 "aliases": [ 00:13:26.120 "6435b772-73cd-43c1-8d9b-d0ef4042091e" 00:13:26.120 ], 00:13:26.120 "product_name": "Malloc disk", 00:13:26.120 "block_size": 512, 00:13:26.120 "num_blocks": 65536, 00:13:26.120 "uuid": "6435b772-73cd-43c1-8d9b-d0ef4042091e", 00:13:26.120 "assigned_rate_limits": { 00:13:26.120 "rw_ios_per_sec": 0, 00:13:26.120 "rw_mbytes_per_sec": 0, 00:13:26.120 "r_mbytes_per_sec": 0, 00:13:26.120 "w_mbytes_per_sec": 0 00:13:26.120 }, 00:13:26.120 "claimed": false, 00:13:26.120 "zoned": false, 00:13:26.120 "supported_io_types": { 00:13:26.120 "read": true, 00:13:26.120 "write": true, 00:13:26.120 "unmap": true, 00:13:26.120 "flush": true, 00:13:26.120 "reset": true, 00:13:26.120 "nvme_admin": false, 00:13:26.120 "nvme_io": false, 00:13:26.120 "nvme_io_md": false, 00:13:26.120 "write_zeroes": true, 00:13:26.120 "zcopy": true, 00:13:26.120 "get_zone_info": false, 00:13:26.120 "zone_management": false, 00:13:26.120 "zone_append": false, 00:13:26.120 "compare": false, 00:13:26.120 "compare_and_write": false, 00:13:26.120 "abort": true, 00:13:26.120 "seek_hole": false, 00:13:26.120 "seek_data": false, 00:13:26.120 "copy": true, 00:13:26.120 "nvme_iov_md": false 00:13:26.120 }, 00:13:26.120 "memory_domains": 
[ 00:13:26.120 { 00:13:26.120 "dma_device_id": "system", 00:13:26.120 "dma_device_type": 1 00:13:26.120 }, 00:13:26.120 { 00:13:26.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.120 "dma_device_type": 2 00:13:26.120 } 00:13:26.120 ], 00:13:26.120 "driver_specific": {} 00:13:26.120 } 00:13:26.120 ] 00:13:26.379 06:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:26.379 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:26.379 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:26.379 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:26.379 [2024-08-14 06:44:53.540978] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.379 [2024-08-14 06:44:53.541070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.379 [2024-08-14 06:44:53.541100] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.379 [2024-08-14 06:44:53.543274] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.379 [2024-08-14 06:44:53.543422] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:26.379 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:26.379 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:26.379 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:26.379 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:26.379 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:26.379 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:26.379 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:26.379 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:26.379 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:26.379 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:26.379 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.379 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.645 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:26.645 "name": "Existed_Raid", 00:13:26.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.645 "strip_size_kb": 64, 00:13:26.645 "state": "configuring", 00:13:26.645 "raid_level": "raid0", 00:13:26.645 "superblock": false, 00:13:26.645 "num_base_bdevs": 4, 00:13:26.645 "num_base_bdevs_discovered": 3, 00:13:26.645 "num_base_bdevs_operational": 4, 00:13:26.645 "base_bdevs_list": [ 00:13:26.645 { 00:13:26.645 "name": "BaseBdev1", 00:13:26.645 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:26.645 "is_configured": false, 00:13:26.645 "data_offset": 0, 00:13:26.645 "data_size": 0 00:13:26.645 }, 00:13:26.645 { 00:13:26.645 "name": "BaseBdev2", 00:13:26.645 "uuid": "15b03dff-9321-49a3-8e9c-1f375349e561", 00:13:26.645 "is_configured": true, 00:13:26.645 "data_offset": 0, 00:13:26.645 "data_size": 65536 00:13:26.645 }, 00:13:26.645 { 00:13:26.645 "name": "BaseBdev3", 00:13:26.645 "uuid": "226d478d-8d93-4579-bbcf-319a3cdb6aaa", 00:13:26.645 "is_configured": true, 00:13:26.645 "data_offset": 0, 00:13:26.645 "data_size": 65536 00:13:26.645 }, 00:13:26.645 { 00:13:26.645 "name": "BaseBdev4", 00:13:26.645 "uuid": "6435b772-73cd-43c1-8d9b-d0ef4042091e", 00:13:26.645 "is_configured": true, 00:13:26.645 "data_offset": 0, 00:13:26.645 "data_size": 65536 00:13:26.645 } 00:13:26.645 ] 00:13:26.645 }' 00:13:26.645 06:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:26.645 06:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.215 06:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:27.474 [2024-08-14 06:44:54.531264] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:27.474 06:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:27.474 06:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:27.475 06:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:27.475 06:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:27.475 06:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:27.475 06:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:27.475 06:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:27.475 06:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:27.475 06:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:27.475 06:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:27.475 06:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.475 06:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.734 06:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:27.734 "name": "Existed_Raid", 00:13:27.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.734 "strip_size_kb": 64, 00:13:27.734 "state": "configuring", 00:13:27.734 "raid_level": "raid0", 00:13:27.734 "superblock": false, 00:13:27.734 "num_base_bdevs": 4, 00:13:27.734 "num_base_bdevs_discovered": 2, 00:13:27.734 "num_base_bdevs_operational": 4, 00:13:27.734 "base_bdevs_list": [ 00:13:27.734 { 00:13:27.734 "name": "BaseBdev1", 00:13:27.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.734 "is_configured": false, 00:13:27.734 "data_offset": 0, 00:13:27.734 "data_size": 0 
00:13:27.734 }, 00:13:27.734 { 00:13:27.734 "name": null, 00:13:27.734 "uuid": "15b03dff-9321-49a3-8e9c-1f375349e561", 00:13:27.734 "is_configured": false, 00:13:27.734 "data_offset": 0, 00:13:27.734 "data_size": 65536 00:13:27.734 }, 00:13:27.734 { 00:13:27.734 "name": "BaseBdev3", 00:13:27.734 "uuid": "226d478d-8d93-4579-bbcf-319a3cdb6aaa", 00:13:27.734 "is_configured": true, 00:13:27.734 "data_offset": 0, 00:13:27.734 "data_size": 65536 00:13:27.734 }, 00:13:27.734 { 00:13:27.734 "name": "BaseBdev4", 00:13:27.734 "uuid": "6435b772-73cd-43c1-8d9b-d0ef4042091e", 00:13:27.734 "is_configured": true, 00:13:27.734 "data_offset": 0, 00:13:27.734 "data_size": 65536 00:13:27.734 } 00:13:27.734 ] 00:13:27.734 }' 00:13:27.734 06:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:27.734 06:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.302 06:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.302 06:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:28.302 06:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:28.302 06:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:28.561 [2024-08-14 06:44:55.690037] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:28.562 BaseBdev1 00:13:28.562 06:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:28.562 06:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:28.562 06:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:28.562 06:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:28.562 06:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:28.562 06:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:28.562 06:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:28.821 06:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:28.821 [ 00:13:28.821 { 00:13:28.821 "name": "BaseBdev1", 00:13:28.821 "aliases": [ 00:13:28.821 "fd27bf03-79cf-42a9-b06f-315fc1a0110a" 00:13:28.821 ], 00:13:28.821 "product_name": "Malloc disk", 00:13:28.821 "block_size": 512, 00:13:28.821 "num_blocks": 65536, 00:13:28.821 "uuid": "fd27bf03-79cf-42a9-b06f-315fc1a0110a", 00:13:28.821 "assigned_rate_limits": { 00:13:28.821 "rw_ios_per_sec": 0, 00:13:28.821 "rw_mbytes_per_sec": 0, 00:13:28.821 "r_mbytes_per_sec": 0, 00:13:28.821 "w_mbytes_per_sec": 0 00:13:28.821 }, 00:13:28.821 "claimed": true, 00:13:28.821 "claim_type": "exclusive_write", 00:13:28.821 "zoned": false, 00:13:28.821 "supported_io_types": { 00:13:28.821 "read": true, 00:13:28.821 "write": true, 00:13:28.821 "unmap": true, 00:13:28.821 "flush": true, 00:13:28.821 "reset": true, 
00:13:28.821 "nvme_admin": false, 00:13:28.821 "nvme_io": false, 00:13:28.821 "nvme_io_md": false, 00:13:28.821 "write_zeroes": true, 00:13:28.821 "zcopy": true, 00:13:28.821 "get_zone_info": false, 00:13:28.821 "zone_management": false, 00:13:28.821 "zone_append": false, 00:13:28.821 "compare": false, 00:13:28.821 "compare_and_write": false, 00:13:28.821 "abort": true, 00:13:28.821 "seek_hole": false, 00:13:28.821 "seek_data": false, 00:13:28.821 "copy": true, 00:13:28.821 "nvme_iov_md": false 00:13:28.821 }, 00:13:28.821 "memory_domains": [ 00:13:28.821 { 00:13:28.821 "dma_device_id": "system", 00:13:28.821 "dma_device_type": 1 00:13:28.821 }, 00:13:28.821 { 00:13:28.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.821 "dma_device_type": 2 00:13:28.821 } 00:13:28.821 ], 00:13:28.821 "driver_specific": {} 00:13:28.821 } 00:13:28.821 ] 00:13:29.081 06:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:29.081 06:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:29.081 06:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:29.081 06:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:29.081 06:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:29.081 06:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:29.081 06:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:29.081 06:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:29.081 06:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:29.081 06:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:29.081 06:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:29.081 06:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.081 06:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.081 06:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:29.081 "name": "Existed_Raid", 00:13:29.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.081 "strip_size_kb": 64, 00:13:29.081 "state": "configuring", 00:13:29.081 "raid_level": "raid0", 00:13:29.081 "superblock": false, 00:13:29.081 "num_base_bdevs": 4, 00:13:29.081 "num_base_bdevs_discovered": 3, 00:13:29.081 "num_base_bdevs_operational": 4, 00:13:29.081 "base_bdevs_list": [ 00:13:29.081 { 00:13:29.081 "name": "BaseBdev1", 00:13:29.081 "uuid": "fd27bf03-79cf-42a9-b06f-315fc1a0110a", 00:13:29.081 "is_configured": true, 00:13:29.081 "data_offset": 0, 00:13:29.081 "data_size": 65536 00:13:29.081 }, 00:13:29.081 { 00:13:29.081 "name": null, 00:13:29.081 "uuid": "15b03dff-9321-49a3-8e9c-1f375349e561", 00:13:29.081 "is_configured": false, 00:13:29.081 "data_offset": 0, 00:13:29.081 "data_size": 65536 00:13:29.081 }, 00:13:29.081 { 00:13:29.081 "name": "BaseBdev3", 00:13:29.081 "uuid": "226d478d-8d93-4579-bbcf-319a3cdb6aaa", 00:13:29.081 "is_configured": true, 00:13:29.081 "data_offset": 0, 
00:13:29.081 "data_size": 65536 00:13:29.081 }, 00:13:29.081 { 00:13:29.081 "name": "BaseBdev4", 00:13:29.081 "uuid": "6435b772-73cd-43c1-8d9b-d0ef4042091e", 00:13:29.081 "is_configured": true, 00:13:29.081 "data_offset": 0, 00:13:29.081 "data_size": 65536 00:13:29.081 } 00:13:29.081 ] 00:13:29.081 }' 00:13:29.081 06:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:29.081 06:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.652 06:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.652 06:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:29.935 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:29.935 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:30.206 [2024-08-14 06:44:57.219735] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:30.206 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:30.206 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:30.206 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:30.206 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:30.206 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:30.206 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:30.206 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:30.206 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:30.206 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:30.206 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:30.206 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.206 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.206 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:30.206 "name": "Existed_Raid", 00:13:30.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.206 "strip_size_kb": 64, 00:13:30.206 "state": "configuring", 00:13:30.206 "raid_level": "raid0", 00:13:30.206 "superblock": false, 00:13:30.206 "num_base_bdevs": 4, 00:13:30.206 "num_base_bdevs_discovered": 2, 00:13:30.206 "num_base_bdevs_operational": 4, 00:13:30.206 "base_bdevs_list": [ 00:13:30.206 { 00:13:30.206 "name": "BaseBdev1", 00:13:30.206 "uuid": "fd27bf03-79cf-42a9-b06f-315fc1a0110a", 00:13:30.206 "is_configured": true, 00:13:30.206 "data_offset": 0, 00:13:30.206 "data_size": 65536 00:13:30.206 }, 00:13:30.206 { 00:13:30.206 "name": null, 00:13:30.206 "uuid": "15b03dff-9321-49a3-8e9c-1f375349e561", 00:13:30.206 
"is_configured": false, 00:13:30.206 "data_offset": 0, 00:13:30.206 "data_size": 65536 00:13:30.206 }, 00:13:30.206 { 00:13:30.206 "name": null, 00:13:30.206 "uuid": "226d478d-8d93-4579-bbcf-319a3cdb6aaa", 00:13:30.206 "is_configured": false, 00:13:30.206 "data_offset": 0, 00:13:30.206 "data_size": 65536 00:13:30.206 }, 00:13:30.206 { 00:13:30.206 "name": "BaseBdev4", 00:13:30.206 "uuid": "6435b772-73cd-43c1-8d9b-d0ef4042091e", 00:13:30.206 "is_configured": true, 00:13:30.206 "data_offset": 0, 00:13:30.206 "data_size": 65536 00:13:30.206 } 00:13:30.206 ] 00:13:30.206 }' 00:13:30.206 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:30.206 06:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.775 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:30.775 06:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.034 06:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:31.034 06:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:31.295 [2024-08-14 06:44:58.361867] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:31.295 06:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:31.295 06:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:31.295 06:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:31.295 06:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:31.295 06:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:31.295 06:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:31.295 06:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:31.295 06:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:31.295 06:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:31.295 06:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:31.295 06:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.295 06:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.555 06:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:31.555 "name": "Existed_Raid", 00:13:31.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.555 "strip_size_kb": 64, 00:13:31.555 "state": "configuring", 00:13:31.555 "raid_level": "raid0", 00:13:31.555 "superblock": false, 00:13:31.555 "num_base_bdevs": 4, 00:13:31.555 "num_base_bdevs_discovered": 3, 00:13:31.555 "num_base_bdevs_operational": 4, 00:13:31.555 "base_bdevs_list": [ 00:13:31.555 { 00:13:31.555 "name": 
"BaseBdev1", 00:13:31.555 "uuid": "fd27bf03-79cf-42a9-b06f-315fc1a0110a", 00:13:31.555 "is_configured": true, 00:13:31.555 "data_offset": 0, 00:13:31.555 "data_size": 65536 00:13:31.555 }, 00:13:31.555 { 00:13:31.555 "name": null, 00:13:31.555 "uuid": "15b03dff-9321-49a3-8e9c-1f375349e561", 00:13:31.555 "is_configured": false, 00:13:31.555 "data_offset": 0, 00:13:31.555 "data_size": 65536 00:13:31.555 }, 00:13:31.555 { 00:13:31.555 "name": "BaseBdev3", 00:13:31.555 "uuid": "226d478d-8d93-4579-bbcf-319a3cdb6aaa", 00:13:31.555 "is_configured": true, 00:13:31.555 "data_offset": 0, 00:13:31.555 "data_size": 65536 00:13:31.555 }, 00:13:31.555 { 00:13:31.555 "name": "BaseBdev4", 00:13:31.555 "uuid": "6435b772-73cd-43c1-8d9b-d0ef4042091e", 00:13:31.555 "is_configured": true, 00:13:31.555 "data_offset": 0, 00:13:31.555 "data_size": 65536 00:13:31.555 } 00:13:31.555 ] 00:13:31.555 }' 00:13:31.555 06:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:31.555 06:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.124 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.124 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:32.124 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:32.124 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:32.383 [2024-08-14 06:44:59.527858] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:32.383 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:32.383 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:32.383 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:32.383 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:32.383 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:32.383 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:32.383 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:32.383 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:32.383 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:32.383 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:32.383 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.383 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.643 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:32.643 "name": "Existed_Raid", 00:13:32.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.643 "strip_size_kb": 64, 00:13:32.643 "state": "configuring", 
00:13:32.643 "raid_level": "raid0", 00:13:32.643 "superblock": false, 00:13:32.643 "num_base_bdevs": 4, 00:13:32.643 "num_base_bdevs_discovered": 2, 00:13:32.643 "num_base_bdevs_operational": 4, 00:13:32.643 "base_bdevs_list": [ 00:13:32.643 { 00:13:32.643 "name": null, 00:13:32.643 "uuid": "fd27bf03-79cf-42a9-b06f-315fc1a0110a", 00:13:32.643 "is_configured": false, 00:13:32.643 "data_offset": 0, 00:13:32.643 "data_size": 65536 00:13:32.643 }, 00:13:32.643 { 00:13:32.643 "name": null, 00:13:32.643 "uuid": "15b03dff-9321-49a3-8e9c-1f375349e561", 00:13:32.643 "is_configured": false, 00:13:32.643 "data_offset": 0, 00:13:32.643 "data_size": 65536 00:13:32.643 }, 00:13:32.643 { 00:13:32.643 "name": "BaseBdev3", 00:13:32.643 "uuid": "226d478d-8d93-4579-bbcf-319a3cdb6aaa", 00:13:32.643 "is_configured": true, 00:13:32.643 "data_offset": 0, 00:13:32.643 "data_size": 65536 00:13:32.643 }, 00:13:32.643 { 00:13:32.643 "name": "BaseBdev4", 00:13:32.643 "uuid": "6435b772-73cd-43c1-8d9b-d0ef4042091e", 00:13:32.643 "is_configured": true, 00:13:32.643 "data_offset": 0, 00:13:32.643 "data_size": 65536 00:13:32.643 } 00:13:32.643 ] 00:13:32.643 }' 00:13:32.643 06:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:32.643 06:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.216 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.216 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:33.476 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:33.476 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:33.476 [2024-08-14 06:45:00.662107] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.476 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:33.476 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:33.476 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:33.476 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:33.476 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:33.476 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:33.476 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:33.476 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:33.476 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:33.476 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:33.476 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.476 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:13:33.735 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:33.735 "name": "Existed_Raid", 00:13:33.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.735 "strip_size_kb": 64, 00:13:33.735 "state": "configuring", 00:13:33.735 "raid_level": "raid0", 00:13:33.735 "superblock": false, 00:13:33.735 "num_base_bdevs": 4, 00:13:33.735 "num_base_bdevs_discovered": 3, 00:13:33.735 "num_base_bdevs_operational": 4, 00:13:33.735 "base_bdevs_list": [ 00:13:33.735 { 00:13:33.735 "name": null, 00:13:33.735 "uuid": "fd27bf03-79cf-42a9-b06f-315fc1a0110a", 00:13:33.735 "is_configured": false, 00:13:33.735 "data_offset": 0, 00:13:33.735 "data_size": 65536 00:13:33.735 }, 00:13:33.735 { 00:13:33.735 "name": "BaseBdev2", 00:13:33.735 "uuid": "15b03dff-9321-49a3-8e9c-1f375349e561", 00:13:33.735 "is_configured": true, 00:13:33.735 "data_offset": 0, 00:13:33.735 "data_size": 65536 00:13:33.735 }, 00:13:33.735 { 00:13:33.735 "name": "BaseBdev3", 00:13:33.735 "uuid": "226d478d-8d93-4579-bbcf-319a3cdb6aaa", 00:13:33.735 "is_configured": true, 00:13:33.735 "data_offset": 0, 00:13:33.735 "data_size": 65536 00:13:33.735 }, 00:13:33.735 { 00:13:33.735 "name": "BaseBdev4", 00:13:33.735 "uuid": "6435b772-73cd-43c1-8d9b-d0ef4042091e", 00:13:33.735 "is_configured": true, 00:13:33.735 "data_offset": 0, 00:13:33.735 "data_size": 65536 00:13:33.735 } 00:13:33.735 ] 00:13:33.735 }' 00:13:33.735 06:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:33.735 06:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.305 06:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.305 06:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:34.564 06:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:34.564 06:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.564 06:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:34.824 06:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u fd27bf03-79cf-42a9-b06f-315fc1a0110a 00:13:34.824 [2024-08-14 06:45:02.012658] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:34.824 [2024-08-14 06:45:02.012853] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:34.824 [2024-08-14 06:45:02.012889] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:34.824 [2024-08-14 06:45:02.013257] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:13:34.824 [2024-08-14 06:45:02.013443] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:34.824 [2024-08-14 06:45:02.013482] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:13:34.824 [2024-08-14 06:45:02.013728] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.824 
NewBaseBdev 00:13:34.824 06:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:34.824 06:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:13:34.824 06:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:34.824 06:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:34.824 06:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:34.824 06:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:34.824 06:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:35.083 06:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:35.342 [ 00:13:35.342 { 00:13:35.342 "name": "NewBaseBdev", 00:13:35.342 "aliases": [ 00:13:35.342 "fd27bf03-79cf-42a9-b06f-315fc1a0110a" 00:13:35.342 ], 00:13:35.342 "product_name": "Malloc disk", 00:13:35.342 "block_size": 512, 00:13:35.342 "num_blocks": 65536, 00:13:35.342 "uuid": "fd27bf03-79cf-42a9-b06f-315fc1a0110a", 00:13:35.342 "assigned_rate_limits": { 00:13:35.342 "rw_ios_per_sec": 0, 00:13:35.342 "rw_mbytes_per_sec": 0, 00:13:35.342 "r_mbytes_per_sec": 0, 00:13:35.342 "w_mbytes_per_sec": 0 00:13:35.342 }, 00:13:35.342 "claimed": true, 00:13:35.342 "claim_type": "exclusive_write", 00:13:35.342 "zoned": false, 00:13:35.342 "supported_io_types": { 00:13:35.342 "read": true, 00:13:35.342 "write": true, 00:13:35.342 "unmap": true, 00:13:35.342 "flush": true, 00:13:35.342 "reset": true, 00:13:35.342 "nvme_admin": false, 00:13:35.342 "nvme_io": false, 00:13:35.342 "nvme_io_md": false, 00:13:35.342 "write_zeroes": true, 00:13:35.342 "zcopy": true, 00:13:35.342 "get_zone_info": false, 00:13:35.342 "zone_management": false, 00:13:35.342 "zone_append": false, 00:13:35.342 "compare": false, 00:13:35.342 "compare_and_write": false, 00:13:35.342 "abort": true, 00:13:35.342 "seek_hole": false, 00:13:35.343 "seek_data": false, 00:13:35.343 "copy": true, 00:13:35.343 "nvme_iov_md": false 00:13:35.343 }, 00:13:35.343 "memory_domains": [ 00:13:35.343 { 00:13:35.343 "dma_device_id": "system", 00:13:35.343 "dma_device_type": 1 00:13:35.343 }, 00:13:35.343 { 00:13:35.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.343 "dma_device_type": 2 00:13:35.343 } 00:13:35.343 ], 00:13:35.343 "driver_specific": {} 00:13:35.343 } 00:13:35.343 ] 00:13:35.343 06:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:35.343 06:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:35.343 06:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:35.343 06:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:35.343 06:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:35.343 06:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:35.343 06:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:13:35.343 06:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:35.343 06:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:35.343 06:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:35.343 06:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:35.343 06:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.343 06:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.343 06:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:35.343 "name": "Existed_Raid", 00:13:35.343 "uuid": "d6bd8f3a-ef80-419f-b36d-5efffb827191", 00:13:35.343 "strip_size_kb": 64, 00:13:35.343 "state": "online", 00:13:35.343 "raid_level": "raid0", 00:13:35.343 "superblock": false, 00:13:35.343 "num_base_bdevs": 4, 00:13:35.343 "num_base_bdevs_discovered": 4, 00:13:35.343 "num_base_bdevs_operational": 4, 00:13:35.343 "base_bdevs_list": [ 00:13:35.343 { 00:13:35.343 "name": "NewBaseBdev", 00:13:35.343 "uuid": "fd27bf03-79cf-42a9-b06f-315fc1a0110a", 00:13:35.343 "is_configured": true, 00:13:35.343 "data_offset": 0, 00:13:35.343 "data_size": 65536 00:13:35.343 }, 00:13:35.343 { 00:13:35.343 "name": "BaseBdev2", 00:13:35.343 "uuid": "15b03dff-9321-49a3-8e9c-1f375349e561", 00:13:35.343 "is_configured": true, 00:13:35.343 "data_offset": 0, 00:13:35.343 "data_size": 65536 00:13:35.343 }, 00:13:35.343 { 00:13:35.343 "name": "BaseBdev3", 00:13:35.343 "uuid": "226d478d-8d93-4579-bbcf-319a3cdb6aaa", 00:13:35.343 "is_configured": true, 00:13:35.343 "data_offset": 0, 00:13:35.343 "data_size": 65536 00:13:35.343 }, 00:13:35.343 { 00:13:35.343 "name": "BaseBdev4", 00:13:35.343 "uuid": "6435b772-73cd-43c1-8d9b-d0ef4042091e", 00:13:35.343 "is_configured": true, 00:13:35.343 "data_offset": 0, 00:13:35.343 "data_size": 65536 00:13:35.343 } 00:13:35.343 ] 00:13:35.343 }' 00:13:35.343 06:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:35.343 06:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.911 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:35.911 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:35.911 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:35.911 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:35.911 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:35.911 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:35.911 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:35.911 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:36.171 [2024-08-14 06:45:03.322872] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.171 06:45:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:36.171 "name": "Existed_Raid", 00:13:36.171 "aliases": [ 00:13:36.171 "d6bd8f3a-ef80-419f-b36d-5efffb827191" 00:13:36.171 ], 00:13:36.171 "product_name": "Raid Volume", 00:13:36.171 "block_size": 512, 00:13:36.171 "num_blocks": 262144, 00:13:36.171 "uuid": "d6bd8f3a-ef80-419f-b36d-5efffb827191", 00:13:36.171 "assigned_rate_limits": { 00:13:36.171 "rw_ios_per_sec": 0, 00:13:36.171 "rw_mbytes_per_sec": 0, 00:13:36.171 "r_mbytes_per_sec": 0, 00:13:36.171 "w_mbytes_per_sec": 0 00:13:36.171 }, 00:13:36.171 "claimed": false, 00:13:36.171 "zoned": false, 00:13:36.171 "supported_io_types": { 00:13:36.171 "read": true, 00:13:36.171 "write": true, 00:13:36.171 "unmap": true, 00:13:36.171 "flush": true, 00:13:36.171 "reset": true, 00:13:36.171 "nvme_admin": false, 00:13:36.171 "nvme_io": false, 00:13:36.171 "nvme_io_md": false, 00:13:36.171 "write_zeroes": true, 00:13:36.171 "zcopy": false, 00:13:36.171 "get_zone_info": false, 00:13:36.171 "zone_management": false, 00:13:36.171 "zone_append": false, 00:13:36.171 "compare": false, 00:13:36.171 "compare_and_write": false, 00:13:36.171 "abort": false, 00:13:36.171 "seek_hole": false, 00:13:36.171 "seek_data": false, 00:13:36.171 "copy": false, 00:13:36.171 "nvme_iov_md": false 00:13:36.171 }, 00:13:36.171 "memory_domains": [ 00:13:36.171 { 00:13:36.171 "dma_device_id": "system", 00:13:36.171 "dma_device_type": 1 00:13:36.171 }, 00:13:36.171 { 00:13:36.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.171 "dma_device_type": 2 00:13:36.171 }, 00:13:36.171 { 00:13:36.171 "dma_device_id": "system", 00:13:36.171 "dma_device_type": 1 00:13:36.171 }, 00:13:36.171 { 00:13:36.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.171 "dma_device_type": 2 00:13:36.171 }, 00:13:36.171 { 00:13:36.171 "dma_device_id": "system", 00:13:36.171 "dma_device_type": 1 00:13:36.171 }, 00:13:36.171 { 00:13:36.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.171 "dma_device_type": 2 00:13:36.171 }, 00:13:36.171 { 00:13:36.171 "dma_device_id": "system", 00:13:36.171 "dma_device_type": 1 00:13:36.171 }, 00:13:36.171 { 00:13:36.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.171 "dma_device_type": 2 00:13:36.171 } 00:13:36.171 ], 00:13:36.171 "driver_specific": { 00:13:36.171 "raid": { 00:13:36.171 "uuid": "d6bd8f3a-ef80-419f-b36d-5efffb827191", 00:13:36.171 "strip_size_kb": 64, 00:13:36.171 "state": "online", 00:13:36.171 "raid_level": "raid0", 00:13:36.171 "superblock": false, 00:13:36.171 "num_base_bdevs": 4, 00:13:36.171 "num_base_bdevs_discovered": 4, 00:13:36.171 "num_base_bdevs_operational": 4, 00:13:36.171 "base_bdevs_list": [ 00:13:36.171 { 00:13:36.171 "name": "NewBaseBdev", 00:13:36.171 "uuid": "fd27bf03-79cf-42a9-b06f-315fc1a0110a", 00:13:36.171 "is_configured": true, 00:13:36.172 "data_offset": 0, 00:13:36.172 "data_size": 65536 00:13:36.172 }, 00:13:36.172 { 00:13:36.172 "name": "BaseBdev2", 00:13:36.172 "uuid": "15b03dff-9321-49a3-8e9c-1f375349e561", 00:13:36.172 "is_configured": true, 00:13:36.172 "data_offset": 0, 00:13:36.172 "data_size": 65536 00:13:36.172 }, 00:13:36.172 { 00:13:36.172 "name": "BaseBdev3", 00:13:36.172 "uuid": "226d478d-8d93-4579-bbcf-319a3cdb6aaa", 00:13:36.172 "is_configured": true, 00:13:36.172 "data_offset": 0, 00:13:36.172 "data_size": 65536 00:13:36.172 }, 00:13:36.172 { 00:13:36.172 "name": "BaseBdev4", 00:13:36.172 "uuid": "6435b772-73cd-43c1-8d9b-d0ef4042091e", 00:13:36.172 "is_configured": true, 00:13:36.172 "data_offset": 0, 00:13:36.172 "data_size": 65536 
00:13:36.172 } 00:13:36.172 ] 00:13:36.172 } 00:13:36.172 } 00:13:36.172 }' 00:13:36.172 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:36.172 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:36.172 BaseBdev2 00:13:36.172 BaseBdev3 00:13:36.172 BaseBdev4' 00:13:36.172 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:36.172 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:36.172 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:36.431 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:36.431 "name": "NewBaseBdev", 00:13:36.431 "aliases": [ 00:13:36.431 "fd27bf03-79cf-42a9-b06f-315fc1a0110a" 00:13:36.431 ], 00:13:36.431 "product_name": "Malloc disk", 00:13:36.431 "block_size": 512, 00:13:36.431 "num_blocks": 65536, 00:13:36.431 "uuid": "fd27bf03-79cf-42a9-b06f-315fc1a0110a", 00:13:36.431 "assigned_rate_limits": { 00:13:36.431 "rw_ios_per_sec": 0, 00:13:36.431 "rw_mbytes_per_sec": 0, 00:13:36.431 "r_mbytes_per_sec": 0, 00:13:36.431 "w_mbytes_per_sec": 0 00:13:36.431 }, 00:13:36.431 "claimed": true, 00:13:36.431 "claim_type": "exclusive_write", 00:13:36.431 "zoned": false, 00:13:36.431 "supported_io_types": { 00:13:36.431 "read": true, 00:13:36.431 "write": true, 00:13:36.431 "unmap": true, 00:13:36.431 "flush": true, 00:13:36.431 "reset": true, 00:13:36.431 "nvme_admin": false, 00:13:36.431 "nvme_io": false, 00:13:36.431 "nvme_io_md": false, 00:13:36.431 "write_zeroes": true, 00:13:36.431 "zcopy": true, 00:13:36.431 "get_zone_info": false, 00:13:36.431 "zone_management": false, 00:13:36.431 "zone_append": false, 00:13:36.431 "compare": false, 00:13:36.431 "compare_and_write": false, 00:13:36.431 "abort": true, 00:13:36.431 "seek_hole": false, 00:13:36.431 "seek_data": false, 00:13:36.431 "copy": true, 00:13:36.431 "nvme_iov_md": false 00:13:36.431 }, 00:13:36.431 "memory_domains": [ 00:13:36.431 { 00:13:36.431 "dma_device_id": "system", 00:13:36.431 "dma_device_type": 1 00:13:36.431 }, 00:13:36.431 { 00:13:36.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.431 "dma_device_type": 2 00:13:36.431 } 00:13:36.431 ], 00:13:36.431 "driver_specific": {} 00:13:36.431 }' 00:13:36.431 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.431 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.431 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:36.431 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:36.689 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:36.689 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:36.689 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:36.689 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:36.689 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:36.689 06:45:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:36.689 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:36.690 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:36.690 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:36.690 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:36.690 06:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:36.949 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:36.949 "name": "BaseBdev2", 00:13:36.949 "aliases": [ 00:13:36.949 "15b03dff-9321-49a3-8e9c-1f375349e561" 00:13:36.949 ], 00:13:36.949 "product_name": "Malloc disk", 00:13:36.949 "block_size": 512, 00:13:36.949 "num_blocks": 65536, 00:13:36.949 "uuid": "15b03dff-9321-49a3-8e9c-1f375349e561", 00:13:36.949 "assigned_rate_limits": { 00:13:36.949 "rw_ios_per_sec": 0, 00:13:36.949 "rw_mbytes_per_sec": 0, 00:13:36.949 "r_mbytes_per_sec": 0, 00:13:36.949 "w_mbytes_per_sec": 0 00:13:36.949 }, 00:13:36.949 "claimed": true, 00:13:36.949 "claim_type": "exclusive_write", 00:13:36.949 "zoned": false, 00:13:36.949 "supported_io_types": { 00:13:36.949 "read": true, 00:13:36.949 "write": true, 00:13:36.949 "unmap": true, 00:13:36.949 "flush": true, 00:13:36.949 "reset": true, 00:13:36.949 "nvme_admin": false, 00:13:36.949 "nvme_io": false, 00:13:36.949 "nvme_io_md": false, 00:13:36.949 "write_zeroes": true, 00:13:36.949 "zcopy": true, 00:13:36.949 "get_zone_info": false, 00:13:36.949 "zone_management": false, 00:13:36.949 "zone_append": false, 00:13:36.949 "compare": false, 00:13:36.949 "compare_and_write": false, 00:13:36.949 "abort": true, 00:13:36.949 "seek_hole": false, 00:13:36.949 "seek_data": false, 00:13:36.949 "copy": true, 00:13:36.949 "nvme_iov_md": false 00:13:36.949 }, 00:13:36.949 "memory_domains": [ 00:13:36.949 { 00:13:36.949 "dma_device_id": "system", 00:13:36.949 "dma_device_type": 1 00:13:36.949 }, 00:13:36.949 { 00:13:36.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.949 "dma_device_type": 2 00:13:36.949 } 00:13:36.949 ], 00:13:36.949 "driver_specific": {} 00:13:36.949 }' 00:13:36.949 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.949 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:37.208 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:37.208 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:37.208 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:37.208 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:37.208 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:37.208 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:37.208 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:37.208 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.208 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.208 06:45:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:37.208 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:37.208 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:37.208 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:37.468 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:37.468 "name": "BaseBdev3", 00:13:37.468 "aliases": [ 00:13:37.468 "226d478d-8d93-4579-bbcf-319a3cdb6aaa" 00:13:37.468 ], 00:13:37.468 "product_name": "Malloc disk", 00:13:37.468 "block_size": 512, 00:13:37.468 "num_blocks": 65536, 00:13:37.468 "uuid": "226d478d-8d93-4579-bbcf-319a3cdb6aaa", 00:13:37.468 "assigned_rate_limits": { 00:13:37.468 "rw_ios_per_sec": 0, 00:13:37.468 "rw_mbytes_per_sec": 0, 00:13:37.468 "r_mbytes_per_sec": 0, 00:13:37.468 "w_mbytes_per_sec": 0 00:13:37.468 }, 00:13:37.468 "claimed": true, 00:13:37.468 "claim_type": "exclusive_write", 00:13:37.468 "zoned": false, 00:13:37.468 "supported_io_types": { 00:13:37.468 "read": true, 00:13:37.468 "write": true, 00:13:37.468 "unmap": true, 00:13:37.468 "flush": true, 00:13:37.468 "reset": true, 00:13:37.468 "nvme_admin": false, 00:13:37.468 "nvme_io": false, 00:13:37.468 "nvme_io_md": false, 00:13:37.468 "write_zeroes": true, 00:13:37.468 "zcopy": true, 00:13:37.468 "get_zone_info": false, 00:13:37.468 "zone_management": false, 00:13:37.468 "zone_append": false, 00:13:37.468 "compare": false, 00:13:37.468 "compare_and_write": false, 00:13:37.468 "abort": true, 00:13:37.468 "seek_hole": false, 00:13:37.468 "seek_data": false, 00:13:37.468 "copy": true, 00:13:37.468 "nvme_iov_md": false 00:13:37.468 }, 00:13:37.468 "memory_domains": [ 00:13:37.468 { 00:13:37.468 "dma_device_id": "system", 00:13:37.468 "dma_device_type": 1 00:13:37.468 }, 00:13:37.468 { 00:13:37.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.468 "dma_device_type": 2 00:13:37.468 } 00:13:37.468 ], 00:13:37.468 "driver_specific": {} 00:13:37.468 }' 00:13:37.469 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:37.469 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:37.469 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:37.469 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:37.469 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:37.728 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:37.728 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:37.728 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:37.728 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:37.728 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.728 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.728 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:37.728 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:37.728 06:45:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:37.728 06:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:37.987 06:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:37.987 "name": "BaseBdev4", 00:13:37.987 "aliases": [ 00:13:37.987 "6435b772-73cd-43c1-8d9b-d0ef4042091e" 00:13:37.987 ], 00:13:37.987 "product_name": "Malloc disk", 00:13:37.987 "block_size": 512, 00:13:37.987 "num_blocks": 65536, 00:13:37.987 "uuid": "6435b772-73cd-43c1-8d9b-d0ef4042091e", 00:13:37.987 "assigned_rate_limits": { 00:13:37.987 "rw_ios_per_sec": 0, 00:13:37.987 "rw_mbytes_per_sec": 0, 00:13:37.987 "r_mbytes_per_sec": 0, 00:13:37.987 "w_mbytes_per_sec": 0 00:13:37.987 }, 00:13:37.987 "claimed": true, 00:13:37.987 "claim_type": "exclusive_write", 00:13:37.987 "zoned": false, 00:13:37.987 "supported_io_types": { 00:13:37.987 "read": true, 00:13:37.987 "write": true, 00:13:37.987 "unmap": true, 00:13:37.987 "flush": true, 00:13:37.987 "reset": true, 00:13:37.987 "nvme_admin": false, 00:13:37.987 "nvme_io": false, 00:13:37.987 "nvme_io_md": false, 00:13:37.987 "write_zeroes": true, 00:13:37.987 "zcopy": true, 00:13:37.987 "get_zone_info": false, 00:13:37.987 "zone_management": false, 00:13:37.987 "zone_append": false, 00:13:37.987 "compare": false, 00:13:37.987 "compare_and_write": false, 00:13:37.987 "abort": true, 00:13:37.987 "seek_hole": false, 00:13:37.987 "seek_data": false, 00:13:37.987 "copy": true, 00:13:37.987 "nvme_iov_md": false 00:13:37.987 }, 00:13:37.987 "memory_domains": [ 00:13:37.987 { 00:13:37.987 "dma_device_id": "system", 00:13:37.987 "dma_device_type": 1 00:13:37.987 }, 00:13:37.987 { 00:13:37.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.987 "dma_device_type": 2 00:13:37.987 } 00:13:37.987 ], 00:13:37.987 "driver_specific": {} 00:13:37.987 }' 00:13:37.987 06:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:37.987 06:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:37.987 06:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:37.987 06:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:38.247 06:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:38.247 06:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:38.247 06:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:38.248 06:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:38.248 06:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:38.248 06:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:38.248 06:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:38.248 06:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:38.248 06:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:38.508 [2024-08-14 06:45:05.666638] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:38.508 [2024-08-14 
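Note: the bdev_raid.sh@203-@208 checks above iterate over each base bdev, pull its JSON with bdev_get_bdevs, and compare the jq-extracted block_size, md_size, md_interleave and dif_type fields against the expected values. A minimal standalone sketch of that loop, assuming rpc.py and jq are available, a bdev_svc instance is already listening on /var/tmp/spdk-raid.sock, and the "rpc" shorthand variable is ours, not the harness's:

    # Sketch only: mirrors the @203-@208 property checks captured in the log above.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        # bdev_get_bdevs -b <name> returns a one-element JSON array; '.[]' unwraps it
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size    <<< "$info") == 512  ]]   # 512-byte data blocks on the malloc base bdev
        [[ $(jq .md_size       <<< "$info") == null ]]   # no separate metadata area
        [[ $(jq .md_interleave <<< "$info") == null ]]   # hence no interleaved metadata either
        [[ $(jq .dif_type      <<< "$info") == null ]]   # and no DIF protection configured
    done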
06:45:05.666689] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:38.508 [2024-08-14 06:45:05.666824] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.508 [2024-08-14 06:45:05.666913] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.508 [2024-08-14 06:45:05.666928] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:13:38.508 06:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 83539 00:13:38.508 06:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 83539 ']' 00:13:38.508 06:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 83539 00:13:38.508 06:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:13:38.508 06:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:38.508 06:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83539 00:13:38.508 killing process with pid 83539 00:13:38.508 06:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:38.508 06:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:38.508 06:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83539' 00:13:38.508 06:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 83539 00:13:38.508 [2024-08-14 06:45:05.726650] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:38.508 06:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 83539 00:13:38.767 [2024-08-14 06:45:05.804134] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.027 06:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:39.027 00:13:39.027 real 0m27.759s 00:13:39.027 user 0m51.357s 00:13:39.027 sys 0m4.205s 00:13:39.027 06:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:39.027 06:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.027 ************************************ 00:13:39.027 END TEST raid_state_function_test 00:13:39.027 ************************************ 00:13:39.027 06:45:06 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:13:39.027 06:45:06 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:39.027 06:45:06 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:39.027 06:45:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.027 ************************************ 00:13:39.027 START TEST raid_state_function_test_sb 00:13:39.027 ************************************ 00:13:39.027 06:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 true 00:13:39.027 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:39.027 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:13:39.027 06:45:06 bdev_raid.raid_state_function_test_sb 
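Note: teardown of the first test above is two steps: the raid bdev is removed over RPC (the DEBUG lines show it moving from online to offline and then being cleaned up) and the bdev_svc application is stopped by pid. A rough equivalent, assuming the same socket and that $raid_pid holds the pid of the bdev_svc instance started in this shell (83539 in this run):

    # Remove the raid bdev first; base bdevs are released as the array goes offline
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
    # Then stop the bdev_svc test app and reap it (killprocess in autotest_common.sh does the same)
    kill "$raid_pid"
    wait "$raid_pid"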
-- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:39.027 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:39.027 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:39.027 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=84553 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:39.028 Process raid pid: 84553 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 84553' 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 84553 /var/tmp/spdk-raid.sock 00:13:39.028 
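Note: the superblock variant is driven the same way as the first test: bdev_svc is launched with a private RPC socket and raid debug logging, and the script waits for that socket before issuing RPCs. A small startup sketch assuming the vagrant paths used in this run; the readiness poll with rpc_get_methods is an illustrative stand-in for the harness's waitforlisten helper seen above:

    # Start the minimal bdev application on a dedicated RPC socket with raid debug logs enabled
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    echo "Process raid pid: $raid_pid"
    # Poll until the RPC server answers (waitforlisten plays this role in the captured run)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done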
06:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 84553 ']' 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:39.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:39.028 06:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.287 [2024-08-14 06:45:06.329352] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:13:39.287 [2024-08-14 06:45:06.329508] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.287 [2024-08-14 06:45:06.456903] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.287 [2024-08-14 06:45:06.531410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.547 [2024-08-14 06:45:06.607422] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.547 [2024-08-14 06:45:06.607477] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.115 06:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:40.115 06:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:13:40.115 06:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:40.115 [2024-08-14 06:45:07.355581] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:40.115 [2024-08-14 06:45:07.355668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:40.115 [2024-08-14 06:45:07.355683] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:40.115 [2024-08-14 06:45:07.355691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:40.116 [2024-08-14 06:45:07.355704] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:40.116 [2024-08-14 06:45:07.355711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:40.116 [2024-08-14 06:45:07.355723] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:40.116 [2024-08-14 06:45:07.355731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:40.375 06:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:40.375 06:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:40.375 06:45:07 
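Note: bdev_raid_create at @250 above is issued before any of the four base bdevs exist, so the raid bdev registers but cannot assemble; the verify_raid_bdev_state step that follows reads it back and expects state "configuring" with zero base bdevs discovered. A condensed sketch of that create-and-check, same socket and names as in the log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # -z 64: 64 KiB strip size, -s: write an on-disk superblock, -r raid0: RAID level
    $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # With no base bdevs present the array must stay "configuring" and discover nothing yet
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'                    # expect: configuring
    $rpc bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'   # expect: 0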
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:40.375 06:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:40.375 06:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:40.375 06:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:40.375 06:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:40.375 06:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:40.375 06:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:40.375 06:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:40.375 06:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.375 06:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.375 06:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:40.375 "name": "Existed_Raid", 00:13:40.375 "uuid": "1f4e67bb-e338-42fc-8986-a56f323c5f59", 00:13:40.375 "strip_size_kb": 64, 00:13:40.375 "state": "configuring", 00:13:40.375 "raid_level": "raid0", 00:13:40.375 "superblock": true, 00:13:40.375 "num_base_bdevs": 4, 00:13:40.375 "num_base_bdevs_discovered": 0, 00:13:40.375 "num_base_bdevs_operational": 4, 00:13:40.375 "base_bdevs_list": [ 00:13:40.375 { 00:13:40.375 "name": "BaseBdev1", 00:13:40.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.375 "is_configured": false, 00:13:40.375 "data_offset": 0, 00:13:40.375 "data_size": 0 00:13:40.375 }, 00:13:40.375 { 00:13:40.375 "name": "BaseBdev2", 00:13:40.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.375 "is_configured": false, 00:13:40.375 "data_offset": 0, 00:13:40.375 "data_size": 0 00:13:40.375 }, 00:13:40.375 { 00:13:40.375 "name": "BaseBdev3", 00:13:40.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.375 "is_configured": false, 00:13:40.375 "data_offset": 0, 00:13:40.375 "data_size": 0 00:13:40.375 }, 00:13:40.375 { 00:13:40.375 "name": "BaseBdev4", 00:13:40.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.375 "is_configured": false, 00:13:40.375 "data_offset": 0, 00:13:40.375 "data_size": 0 00:13:40.375 } 00:13:40.375 ] 00:13:40.375 }' 00:13:40.375 06:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:40.375 06:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.944 06:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:41.203 [2024-08-14 06:45:08.241913] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:41.203 [2024-08-14 06:45:08.241975] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:13:41.203 06:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' 
-n Existed_Raid 00:13:41.203 [2024-08-14 06:45:08.445601] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:41.203 [2024-08-14 06:45:08.445669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:41.203 [2024-08-14 06:45:08.445685] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:41.203 [2024-08-14 06:45:08.445693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:41.203 [2024-08-14 06:45:08.445702] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:41.203 [2024-08-14 06:45:08.445709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:41.203 [2024-08-14 06:45:08.445718] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:41.203 [2024-08-14 06:45:08.445725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:41.463 06:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:41.463 [2024-08-14 06:45:08.656515] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.463 BaseBdev1 00:13:41.463 06:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:41.463 06:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:41.463 06:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:41.463 06:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:41.463 06:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:41.463 06:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:41.463 06:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:41.722 06:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:41.981 [ 00:13:41.981 { 00:13:41.981 "name": "BaseBdev1", 00:13:41.981 "aliases": [ 00:13:41.981 "d061b101-865a-41bb-a661-c8d5c0fd2638" 00:13:41.981 ], 00:13:41.981 "product_name": "Malloc disk", 00:13:41.981 "block_size": 512, 00:13:41.981 "num_blocks": 65536, 00:13:41.981 "uuid": "d061b101-865a-41bb-a661-c8d5c0fd2638", 00:13:41.981 "assigned_rate_limits": { 00:13:41.981 "rw_ios_per_sec": 0, 00:13:41.981 "rw_mbytes_per_sec": 0, 00:13:41.981 "r_mbytes_per_sec": 0, 00:13:41.981 "w_mbytes_per_sec": 0 00:13:41.981 }, 00:13:41.981 "claimed": true, 00:13:41.981 "claim_type": "exclusive_write", 00:13:41.981 "zoned": false, 00:13:41.981 "supported_io_types": { 00:13:41.981 "read": true, 00:13:41.981 "write": true, 00:13:41.981 "unmap": true, 00:13:41.981 "flush": true, 00:13:41.981 "reset": true, 00:13:41.981 "nvme_admin": false, 00:13:41.981 "nvme_io": false, 00:13:41.981 "nvme_io_md": false, 00:13:41.981 "write_zeroes": true, 00:13:41.981 "zcopy": true, 00:13:41.981 "get_zone_info": false, 00:13:41.981 "zone_management": false, 00:13:41.981 
"zone_append": false, 00:13:41.981 "compare": false, 00:13:41.981 "compare_and_write": false, 00:13:41.981 "abort": true, 00:13:41.981 "seek_hole": false, 00:13:41.981 "seek_data": false, 00:13:41.981 "copy": true, 00:13:41.981 "nvme_iov_md": false 00:13:41.981 }, 00:13:41.981 "memory_domains": [ 00:13:41.981 { 00:13:41.981 "dma_device_id": "system", 00:13:41.981 "dma_device_type": 1 00:13:41.981 }, 00:13:41.981 { 00:13:41.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.981 "dma_device_type": 2 00:13:41.981 } 00:13:41.981 ], 00:13:41.981 "driver_specific": {} 00:13:41.981 } 00:13:41.981 ] 00:13:41.981 06:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:41.981 06:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:41.981 06:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:41.981 06:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:41.981 06:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:41.981 06:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:41.981 06:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:41.981 06:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:41.981 06:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:41.981 06:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:41.981 06:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:41.982 06:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.982 06:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.241 06:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:42.241 "name": "Existed_Raid", 00:13:42.241 "uuid": "e99d2354-34a5-4981-bd8a-10bdc2d64a71", 00:13:42.241 "strip_size_kb": 64, 00:13:42.241 "state": "configuring", 00:13:42.241 "raid_level": "raid0", 00:13:42.241 "superblock": true, 00:13:42.241 "num_base_bdevs": 4, 00:13:42.241 "num_base_bdevs_discovered": 1, 00:13:42.241 "num_base_bdevs_operational": 4, 00:13:42.241 "base_bdevs_list": [ 00:13:42.241 { 00:13:42.241 "name": "BaseBdev1", 00:13:42.241 "uuid": "d061b101-865a-41bb-a661-c8d5c0fd2638", 00:13:42.241 "is_configured": true, 00:13:42.241 "data_offset": 2048, 00:13:42.241 "data_size": 63488 00:13:42.241 }, 00:13:42.241 { 00:13:42.241 "name": "BaseBdev2", 00:13:42.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.241 "is_configured": false, 00:13:42.241 "data_offset": 0, 00:13:42.241 "data_size": 0 00:13:42.241 }, 00:13:42.241 { 00:13:42.241 "name": "BaseBdev3", 00:13:42.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.241 "is_configured": false, 00:13:42.241 "data_offset": 0, 00:13:42.241 "data_size": 0 00:13:42.241 }, 00:13:42.241 { 00:13:42.241 "name": "BaseBdev4", 00:13:42.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.241 "is_configured": false, 
00:13:42.241 "data_offset": 0, 00:13:42.241 "data_size": 0 00:13:42.241 } 00:13:42.241 ] 00:13:42.241 }' 00:13:42.241 06:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:42.241 06:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.810 06:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:42.810 [2024-08-14 06:45:09.986355] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:42.810 [2024-08-14 06:45:09.986457] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:42.810 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:43.069 [2024-08-14 06:45:10.186119] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:43.069 [2024-08-14 06:45:10.188394] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:43.069 [2024-08-14 06:45:10.188439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:43.069 [2024-08-14 06:45:10.188451] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:43.069 [2024-08-14 06:45:10.188465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:43.069 [2024-08-14 06:45:10.188475] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:43.069 [2024-08-14 06:45:10.188482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:43.069 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:43.069 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:43.069 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:43.069 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:43.069 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:43.069 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:43.069 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:43.069 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:43.069 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:43.069 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:43.069 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:43.069 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:43.069 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.069 
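Note: BaseBdev1 above was added with bdev_malloc_create and the raid state re-checked; the same cycle repeats for BaseBdev2 through BaseBdev4 in the entries that follow, with num_base_bdevs_discovered expected to grow by one each time until all four are claimed and the array goes online. A compact sketch of one such step, assuming Existed_Raid already exists in the configuring state and the "rpc" shorthand from the earlier sketch:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Create a 32 MiB malloc bdev with 512-byte blocks; the waiting raid bdev claims it on examine
    $rpc bdev_malloc_create 32 512 -b BaseBdev2
    # Make sure examine/claim callbacks have completed before inspecting the raid state
    $rpc bdev_wait_for_examine
    # One more base bdev should now be discovered while the raid bdev keeps configuring
    $rpc bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'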
06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.328 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:43.328 "name": "Existed_Raid", 00:13:43.328 "uuid": "0e164558-8d5a-477d-a55f-672e35dd0b24", 00:13:43.328 "strip_size_kb": 64, 00:13:43.328 "state": "configuring", 00:13:43.328 "raid_level": "raid0", 00:13:43.328 "superblock": true, 00:13:43.328 "num_base_bdevs": 4, 00:13:43.328 "num_base_bdevs_discovered": 1, 00:13:43.328 "num_base_bdevs_operational": 4, 00:13:43.328 "base_bdevs_list": [ 00:13:43.328 { 00:13:43.328 "name": "BaseBdev1", 00:13:43.328 "uuid": "d061b101-865a-41bb-a661-c8d5c0fd2638", 00:13:43.328 "is_configured": true, 00:13:43.328 "data_offset": 2048, 00:13:43.328 "data_size": 63488 00:13:43.328 }, 00:13:43.328 { 00:13:43.328 "name": "BaseBdev2", 00:13:43.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.328 "is_configured": false, 00:13:43.328 "data_offset": 0, 00:13:43.328 "data_size": 0 00:13:43.328 }, 00:13:43.328 { 00:13:43.328 "name": "BaseBdev3", 00:13:43.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.328 "is_configured": false, 00:13:43.328 "data_offset": 0, 00:13:43.328 "data_size": 0 00:13:43.328 }, 00:13:43.328 { 00:13:43.328 "name": "BaseBdev4", 00:13:43.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.328 "is_configured": false, 00:13:43.328 "data_offset": 0, 00:13:43.328 "data_size": 0 00:13:43.328 } 00:13:43.328 ] 00:13:43.328 }' 00:13:43.328 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:43.328 06:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.897 06:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:43.897 [2024-08-14 06:45:11.084607] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:43.897 BaseBdev2 00:13:43.897 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:43.897 06:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:43.897 06:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:43.897 06:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:43.897 06:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:43.897 06:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:43.897 06:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:44.156 06:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:44.416 [ 00:13:44.416 { 00:13:44.416 "name": "BaseBdev2", 00:13:44.416 "aliases": [ 00:13:44.416 "7b21d7d4-c11a-4366-a9fe-e04323193299" 00:13:44.416 ], 00:13:44.416 "product_name": "Malloc disk", 00:13:44.416 "block_size": 512, 00:13:44.416 "num_blocks": 65536, 00:13:44.416 "uuid": "7b21d7d4-c11a-4366-a9fe-e04323193299", 00:13:44.416 
"assigned_rate_limits": { 00:13:44.416 "rw_ios_per_sec": 0, 00:13:44.416 "rw_mbytes_per_sec": 0, 00:13:44.416 "r_mbytes_per_sec": 0, 00:13:44.416 "w_mbytes_per_sec": 0 00:13:44.416 }, 00:13:44.416 "claimed": true, 00:13:44.416 "claim_type": "exclusive_write", 00:13:44.416 "zoned": false, 00:13:44.416 "supported_io_types": { 00:13:44.416 "read": true, 00:13:44.416 "write": true, 00:13:44.416 "unmap": true, 00:13:44.416 "flush": true, 00:13:44.416 "reset": true, 00:13:44.416 "nvme_admin": false, 00:13:44.416 "nvme_io": false, 00:13:44.416 "nvme_io_md": false, 00:13:44.416 "write_zeroes": true, 00:13:44.416 "zcopy": true, 00:13:44.416 "get_zone_info": false, 00:13:44.416 "zone_management": false, 00:13:44.416 "zone_append": false, 00:13:44.416 "compare": false, 00:13:44.416 "compare_and_write": false, 00:13:44.416 "abort": true, 00:13:44.416 "seek_hole": false, 00:13:44.416 "seek_data": false, 00:13:44.416 "copy": true, 00:13:44.416 "nvme_iov_md": false 00:13:44.416 }, 00:13:44.416 "memory_domains": [ 00:13:44.416 { 00:13:44.416 "dma_device_id": "system", 00:13:44.416 "dma_device_type": 1 00:13:44.416 }, 00:13:44.416 { 00:13:44.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.416 "dma_device_type": 2 00:13:44.416 } 00:13:44.416 ], 00:13:44.416 "driver_specific": {} 00:13:44.416 } 00:13:44.416 ] 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:44.416 "name": "Existed_Raid", 00:13:44.416 "uuid": "0e164558-8d5a-477d-a55f-672e35dd0b24", 00:13:44.416 "strip_size_kb": 64, 00:13:44.416 "state": "configuring", 00:13:44.416 "raid_level": "raid0", 00:13:44.416 "superblock": true, 00:13:44.416 "num_base_bdevs": 4, 00:13:44.416 
"num_base_bdevs_discovered": 2, 00:13:44.416 "num_base_bdevs_operational": 4, 00:13:44.416 "base_bdevs_list": [ 00:13:44.416 { 00:13:44.416 "name": "BaseBdev1", 00:13:44.416 "uuid": "d061b101-865a-41bb-a661-c8d5c0fd2638", 00:13:44.416 "is_configured": true, 00:13:44.416 "data_offset": 2048, 00:13:44.416 "data_size": 63488 00:13:44.416 }, 00:13:44.416 { 00:13:44.416 "name": "BaseBdev2", 00:13:44.416 "uuid": "7b21d7d4-c11a-4366-a9fe-e04323193299", 00:13:44.416 "is_configured": true, 00:13:44.416 "data_offset": 2048, 00:13:44.416 "data_size": 63488 00:13:44.416 }, 00:13:44.416 { 00:13:44.416 "name": "BaseBdev3", 00:13:44.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.416 "is_configured": false, 00:13:44.416 "data_offset": 0, 00:13:44.416 "data_size": 0 00:13:44.416 }, 00:13:44.416 { 00:13:44.416 "name": "BaseBdev4", 00:13:44.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.416 "is_configured": false, 00:13:44.416 "data_offset": 0, 00:13:44.416 "data_size": 0 00:13:44.416 } 00:13:44.416 ] 00:13:44.416 }' 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:44.416 06:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.986 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:45.245 [2024-08-14 06:45:12.363658] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.245 BaseBdev3 00:13:45.245 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:45.245 06:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:13:45.245 06:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:45.245 06:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:45.245 06:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:45.245 06:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:45.245 06:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:45.514 06:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:45.514 [ 00:13:45.514 { 00:13:45.514 "name": "BaseBdev3", 00:13:45.514 "aliases": [ 00:13:45.514 "063a98be-d75f-4175-a351-dd7626af80fc" 00:13:45.514 ], 00:13:45.514 "product_name": "Malloc disk", 00:13:45.514 "block_size": 512, 00:13:45.514 "num_blocks": 65536, 00:13:45.514 "uuid": "063a98be-d75f-4175-a351-dd7626af80fc", 00:13:45.514 "assigned_rate_limits": { 00:13:45.514 "rw_ios_per_sec": 0, 00:13:45.514 "rw_mbytes_per_sec": 0, 00:13:45.514 "r_mbytes_per_sec": 0, 00:13:45.514 "w_mbytes_per_sec": 0 00:13:45.514 }, 00:13:45.514 "claimed": true, 00:13:45.514 "claim_type": "exclusive_write", 00:13:45.514 "zoned": false, 00:13:45.514 "supported_io_types": { 00:13:45.514 "read": true, 00:13:45.514 "write": true, 00:13:45.514 "unmap": true, 00:13:45.514 "flush": true, 00:13:45.514 "reset": true, 00:13:45.514 "nvme_admin": false, 00:13:45.514 "nvme_io": false, 
00:13:45.514 "nvme_io_md": false, 00:13:45.514 "write_zeroes": true, 00:13:45.514 "zcopy": true, 00:13:45.514 "get_zone_info": false, 00:13:45.514 "zone_management": false, 00:13:45.514 "zone_append": false, 00:13:45.514 "compare": false, 00:13:45.514 "compare_and_write": false, 00:13:45.514 "abort": true, 00:13:45.514 "seek_hole": false, 00:13:45.514 "seek_data": false, 00:13:45.514 "copy": true, 00:13:45.514 "nvme_iov_md": false 00:13:45.514 }, 00:13:45.514 "memory_domains": [ 00:13:45.514 { 00:13:45.514 "dma_device_id": "system", 00:13:45.514 "dma_device_type": 1 00:13:45.514 }, 00:13:45.514 { 00:13:45.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.514 "dma_device_type": 2 00:13:45.514 } 00:13:45.514 ], 00:13:45.514 "driver_specific": {} 00:13:45.514 } 00:13:45.514 ] 00:13:45.514 06:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:45.514 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:45.514 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:45.514 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:45.514 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:45.514 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:45.514 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:45.514 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:45.514 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:45.514 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:45.514 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:45.514 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:45.514 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:45.514 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.514 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.796 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:45.796 "name": "Existed_Raid", 00:13:45.796 "uuid": "0e164558-8d5a-477d-a55f-672e35dd0b24", 00:13:45.796 "strip_size_kb": 64, 00:13:45.796 "state": "configuring", 00:13:45.796 "raid_level": "raid0", 00:13:45.796 "superblock": true, 00:13:45.796 "num_base_bdevs": 4, 00:13:45.796 "num_base_bdevs_discovered": 3, 00:13:45.796 "num_base_bdevs_operational": 4, 00:13:45.796 "base_bdevs_list": [ 00:13:45.796 { 00:13:45.796 "name": "BaseBdev1", 00:13:45.796 "uuid": "d061b101-865a-41bb-a661-c8d5c0fd2638", 00:13:45.796 "is_configured": true, 00:13:45.796 "data_offset": 2048, 00:13:45.796 "data_size": 63488 00:13:45.796 }, 00:13:45.796 { 00:13:45.796 "name": "BaseBdev2", 00:13:45.796 "uuid": "7b21d7d4-c11a-4366-a9fe-e04323193299", 00:13:45.796 "is_configured": true, 00:13:45.796 "data_offset": 2048, 00:13:45.796 
"data_size": 63488 00:13:45.796 }, 00:13:45.796 { 00:13:45.796 "name": "BaseBdev3", 00:13:45.796 "uuid": "063a98be-d75f-4175-a351-dd7626af80fc", 00:13:45.796 "is_configured": true, 00:13:45.796 "data_offset": 2048, 00:13:45.796 "data_size": 63488 00:13:45.796 }, 00:13:45.796 { 00:13:45.796 "name": "BaseBdev4", 00:13:45.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.796 "is_configured": false, 00:13:45.796 "data_offset": 0, 00:13:45.796 "data_size": 0 00:13:45.796 } 00:13:45.796 ] 00:13:45.796 }' 00:13:45.796 06:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:45.797 06:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.381 06:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:46.639 [2024-08-14 06:45:13.726211] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:46.639 [2024-08-14 06:45:13.726444] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:46.639 [2024-08-14 06:45:13.726463] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:46.639 [2024-08-14 06:45:13.726769] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:46.639 [2024-08-14 06:45:13.726932] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:46.639 [2024-08-14 06:45:13.726954] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:46.639 [2024-08-14 06:45:13.727080] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.639 BaseBdev4 00:13:46.639 06:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:13:46.639 06:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:13:46.639 06:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:46.639 06:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:46.639 06:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:46.639 06:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:46.639 06:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:46.898 06:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:47.157 [ 00:13:47.157 { 00:13:47.157 "name": "BaseBdev4", 00:13:47.157 "aliases": [ 00:13:47.157 "a1553cb3-add0-4b64-a58e-f69ffa65501e" 00:13:47.157 ], 00:13:47.157 "product_name": "Malloc disk", 00:13:47.157 "block_size": 512, 00:13:47.157 "num_blocks": 65536, 00:13:47.157 "uuid": "a1553cb3-add0-4b64-a58e-f69ffa65501e", 00:13:47.157 "assigned_rate_limits": { 00:13:47.157 "rw_ios_per_sec": 0, 00:13:47.157 "rw_mbytes_per_sec": 0, 00:13:47.157 "r_mbytes_per_sec": 0, 00:13:47.157 "w_mbytes_per_sec": 0 00:13:47.157 }, 00:13:47.157 "claimed": true, 00:13:47.157 "claim_type": "exclusive_write", 00:13:47.157 
"zoned": false, 00:13:47.157 "supported_io_types": { 00:13:47.157 "read": true, 00:13:47.157 "write": true, 00:13:47.157 "unmap": true, 00:13:47.157 "flush": true, 00:13:47.157 "reset": true, 00:13:47.157 "nvme_admin": false, 00:13:47.157 "nvme_io": false, 00:13:47.157 "nvme_io_md": false, 00:13:47.157 "write_zeroes": true, 00:13:47.157 "zcopy": true, 00:13:47.157 "get_zone_info": false, 00:13:47.157 "zone_management": false, 00:13:47.157 "zone_append": false, 00:13:47.157 "compare": false, 00:13:47.157 "compare_and_write": false, 00:13:47.157 "abort": true, 00:13:47.157 "seek_hole": false, 00:13:47.157 "seek_data": false, 00:13:47.157 "copy": true, 00:13:47.157 "nvme_iov_md": false 00:13:47.157 }, 00:13:47.158 "memory_domains": [ 00:13:47.158 { 00:13:47.158 "dma_device_id": "system", 00:13:47.158 "dma_device_type": 1 00:13:47.158 }, 00:13:47.158 { 00:13:47.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.158 "dma_device_type": 2 00:13:47.158 } 00:13:47.158 ], 00:13:47.158 "driver_specific": {} 00:13:47.158 } 00:13:47.158 ] 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:47.158 "name": "Existed_Raid", 00:13:47.158 "uuid": "0e164558-8d5a-477d-a55f-672e35dd0b24", 00:13:47.158 "strip_size_kb": 64, 00:13:47.158 "state": "online", 00:13:47.158 "raid_level": "raid0", 00:13:47.158 "superblock": true, 00:13:47.158 "num_base_bdevs": 4, 00:13:47.158 "num_base_bdevs_discovered": 4, 00:13:47.158 "num_base_bdevs_operational": 4, 00:13:47.158 "base_bdevs_list": [ 00:13:47.158 { 00:13:47.158 "name": "BaseBdev1", 00:13:47.158 "uuid": "d061b101-865a-41bb-a661-c8d5c0fd2638", 00:13:47.158 "is_configured": true, 00:13:47.158 "data_offset": 2048, 
00:13:47.158 "data_size": 63488 00:13:47.158 }, 00:13:47.158 { 00:13:47.158 "name": "BaseBdev2", 00:13:47.158 "uuid": "7b21d7d4-c11a-4366-a9fe-e04323193299", 00:13:47.158 "is_configured": true, 00:13:47.158 "data_offset": 2048, 00:13:47.158 "data_size": 63488 00:13:47.158 }, 00:13:47.158 { 00:13:47.158 "name": "BaseBdev3", 00:13:47.158 "uuid": "063a98be-d75f-4175-a351-dd7626af80fc", 00:13:47.158 "is_configured": true, 00:13:47.158 "data_offset": 2048, 00:13:47.158 "data_size": 63488 00:13:47.158 }, 00:13:47.158 { 00:13:47.158 "name": "BaseBdev4", 00:13:47.158 "uuid": "a1553cb3-add0-4b64-a58e-f69ffa65501e", 00:13:47.158 "is_configured": true, 00:13:47.158 "data_offset": 2048, 00:13:47.158 "data_size": 63488 00:13:47.158 } 00:13:47.158 ] 00:13:47.158 }' 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:47.158 06:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.725 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:47.725 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:47.726 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:47.726 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:47.726 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:47.726 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:47.726 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:47.726 06:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:47.985 [2024-08-14 06:45:15.160337] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.985 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:47.985 "name": "Existed_Raid", 00:13:47.985 "aliases": [ 00:13:47.985 "0e164558-8d5a-477d-a55f-672e35dd0b24" 00:13:47.985 ], 00:13:47.985 "product_name": "Raid Volume", 00:13:47.985 "block_size": 512, 00:13:47.985 "num_blocks": 253952, 00:13:47.985 "uuid": "0e164558-8d5a-477d-a55f-672e35dd0b24", 00:13:47.985 "assigned_rate_limits": { 00:13:47.985 "rw_ios_per_sec": 0, 00:13:47.985 "rw_mbytes_per_sec": 0, 00:13:47.985 "r_mbytes_per_sec": 0, 00:13:47.985 "w_mbytes_per_sec": 0 00:13:47.985 }, 00:13:47.985 "claimed": false, 00:13:47.985 "zoned": false, 00:13:47.985 "supported_io_types": { 00:13:47.985 "read": true, 00:13:47.985 "write": true, 00:13:47.985 "unmap": true, 00:13:47.985 "flush": true, 00:13:47.985 "reset": true, 00:13:47.985 "nvme_admin": false, 00:13:47.985 "nvme_io": false, 00:13:47.985 "nvme_io_md": false, 00:13:47.985 "write_zeroes": true, 00:13:47.985 "zcopy": false, 00:13:47.985 "get_zone_info": false, 00:13:47.985 "zone_management": false, 00:13:47.985 "zone_append": false, 00:13:47.985 "compare": false, 00:13:47.985 "compare_and_write": false, 00:13:47.985 "abort": false, 00:13:47.985 "seek_hole": false, 00:13:47.985 "seek_data": false, 00:13:47.985 "copy": false, 00:13:47.985 "nvme_iov_md": false 00:13:47.985 }, 00:13:47.985 "memory_domains": [ 00:13:47.985 { 00:13:47.985 "dma_device_id": "system", 00:13:47.985 
"dma_device_type": 1 00:13:47.985 }, 00:13:47.985 { 00:13:47.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.985 "dma_device_type": 2 00:13:47.985 }, 00:13:47.985 { 00:13:47.985 "dma_device_id": "system", 00:13:47.985 "dma_device_type": 1 00:13:47.985 }, 00:13:47.985 { 00:13:47.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.985 "dma_device_type": 2 00:13:47.985 }, 00:13:47.985 { 00:13:47.985 "dma_device_id": "system", 00:13:47.985 "dma_device_type": 1 00:13:47.985 }, 00:13:47.985 { 00:13:47.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.985 "dma_device_type": 2 00:13:47.985 }, 00:13:47.985 { 00:13:47.985 "dma_device_id": "system", 00:13:47.985 "dma_device_type": 1 00:13:47.985 }, 00:13:47.985 { 00:13:47.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.985 "dma_device_type": 2 00:13:47.985 } 00:13:47.985 ], 00:13:47.985 "driver_specific": { 00:13:47.985 "raid": { 00:13:47.985 "uuid": "0e164558-8d5a-477d-a55f-672e35dd0b24", 00:13:47.985 "strip_size_kb": 64, 00:13:47.985 "state": "online", 00:13:47.985 "raid_level": "raid0", 00:13:47.985 "superblock": true, 00:13:47.985 "num_base_bdevs": 4, 00:13:47.985 "num_base_bdevs_discovered": 4, 00:13:47.985 "num_base_bdevs_operational": 4, 00:13:47.985 "base_bdevs_list": [ 00:13:47.985 { 00:13:47.985 "name": "BaseBdev1", 00:13:47.985 "uuid": "d061b101-865a-41bb-a661-c8d5c0fd2638", 00:13:47.985 "is_configured": true, 00:13:47.985 "data_offset": 2048, 00:13:47.985 "data_size": 63488 00:13:47.985 }, 00:13:47.985 { 00:13:47.985 "name": "BaseBdev2", 00:13:47.985 "uuid": "7b21d7d4-c11a-4366-a9fe-e04323193299", 00:13:47.985 "is_configured": true, 00:13:47.985 "data_offset": 2048, 00:13:47.985 "data_size": 63488 00:13:47.985 }, 00:13:47.985 { 00:13:47.985 "name": "BaseBdev3", 00:13:47.985 "uuid": "063a98be-d75f-4175-a351-dd7626af80fc", 00:13:47.985 "is_configured": true, 00:13:47.985 "data_offset": 2048, 00:13:47.985 "data_size": 63488 00:13:47.985 }, 00:13:47.985 { 00:13:47.985 "name": "BaseBdev4", 00:13:47.985 "uuid": "a1553cb3-add0-4b64-a58e-f69ffa65501e", 00:13:47.985 "is_configured": true, 00:13:47.985 "data_offset": 2048, 00:13:47.985 "data_size": 63488 00:13:47.985 } 00:13:47.985 ] 00:13:47.985 } 00:13:47.985 } 00:13:47.985 }' 00:13:47.985 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:47.985 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:47.985 BaseBdev2 00:13:47.985 BaseBdev3 00:13:47.985 BaseBdev4' 00:13:47.985 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:47.985 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:47.985 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:48.244 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:48.244 "name": "BaseBdev1", 00:13:48.244 "aliases": [ 00:13:48.244 "d061b101-865a-41bb-a661-c8d5c0fd2638" 00:13:48.244 ], 00:13:48.244 "product_name": "Malloc disk", 00:13:48.244 "block_size": 512, 00:13:48.244 "num_blocks": 65536, 00:13:48.244 "uuid": "d061b101-865a-41bb-a661-c8d5c0fd2638", 00:13:48.244 "assigned_rate_limits": { 00:13:48.244 "rw_ios_per_sec": 0, 00:13:48.244 "rw_mbytes_per_sec": 0, 00:13:48.244 "r_mbytes_per_sec": 0, 
00:13:48.244 "w_mbytes_per_sec": 0 00:13:48.244 }, 00:13:48.244 "claimed": true, 00:13:48.244 "claim_type": "exclusive_write", 00:13:48.244 "zoned": false, 00:13:48.244 "supported_io_types": { 00:13:48.244 "read": true, 00:13:48.244 "write": true, 00:13:48.244 "unmap": true, 00:13:48.244 "flush": true, 00:13:48.245 "reset": true, 00:13:48.245 "nvme_admin": false, 00:13:48.245 "nvme_io": false, 00:13:48.245 "nvme_io_md": false, 00:13:48.245 "write_zeroes": true, 00:13:48.245 "zcopy": true, 00:13:48.245 "get_zone_info": false, 00:13:48.245 "zone_management": false, 00:13:48.245 "zone_append": false, 00:13:48.245 "compare": false, 00:13:48.245 "compare_and_write": false, 00:13:48.245 "abort": true, 00:13:48.245 "seek_hole": false, 00:13:48.245 "seek_data": false, 00:13:48.245 "copy": true, 00:13:48.245 "nvme_iov_md": false 00:13:48.245 }, 00:13:48.245 "memory_domains": [ 00:13:48.245 { 00:13:48.245 "dma_device_id": "system", 00:13:48.245 "dma_device_type": 1 00:13:48.245 }, 00:13:48.245 { 00:13:48.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.245 "dma_device_type": 2 00:13:48.245 } 00:13:48.245 ], 00:13:48.245 "driver_specific": {} 00:13:48.245 }' 00:13:48.245 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:48.245 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:48.245 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:48.245 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:48.504 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:48.504 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:48.504 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:48.504 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:48.504 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:48.504 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:48.504 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:48.504 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:48.504 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:48.504 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:48.504 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:48.764 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:48.764 "name": "BaseBdev2", 00:13:48.764 "aliases": [ 00:13:48.764 "7b21d7d4-c11a-4366-a9fe-e04323193299" 00:13:48.764 ], 00:13:48.764 "product_name": "Malloc disk", 00:13:48.764 "block_size": 512, 00:13:48.764 "num_blocks": 65536, 00:13:48.764 "uuid": "7b21d7d4-c11a-4366-a9fe-e04323193299", 00:13:48.764 "assigned_rate_limits": { 00:13:48.764 "rw_ios_per_sec": 0, 00:13:48.764 "rw_mbytes_per_sec": 0, 00:13:48.764 "r_mbytes_per_sec": 0, 00:13:48.764 "w_mbytes_per_sec": 0 00:13:48.764 }, 00:13:48.764 "claimed": true, 00:13:48.764 "claim_type": "exclusive_write", 00:13:48.764 "zoned": 
false, 00:13:48.764 "supported_io_types": { 00:13:48.764 "read": true, 00:13:48.764 "write": true, 00:13:48.764 "unmap": true, 00:13:48.764 "flush": true, 00:13:48.764 "reset": true, 00:13:48.764 "nvme_admin": false, 00:13:48.764 "nvme_io": false, 00:13:48.764 "nvme_io_md": false, 00:13:48.764 "write_zeroes": true, 00:13:48.764 "zcopy": true, 00:13:48.764 "get_zone_info": false, 00:13:48.764 "zone_management": false, 00:13:48.764 "zone_append": false, 00:13:48.764 "compare": false, 00:13:48.764 "compare_and_write": false, 00:13:48.764 "abort": true, 00:13:48.764 "seek_hole": false, 00:13:48.764 "seek_data": false, 00:13:48.764 "copy": true, 00:13:48.764 "nvme_iov_md": false 00:13:48.764 }, 00:13:48.764 "memory_domains": [ 00:13:48.764 { 00:13:48.764 "dma_device_id": "system", 00:13:48.764 "dma_device_type": 1 00:13:48.764 }, 00:13:48.764 { 00:13:48.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.764 "dma_device_type": 2 00:13:48.764 } 00:13:48.764 ], 00:13:48.764 "driver_specific": {} 00:13:48.764 }' 00:13:48.764 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:48.764 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:48.764 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:48.764 06:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:48.764 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:49.023 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:49.023 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:49.023 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:49.023 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:49.023 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:49.024 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:49.024 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:49.024 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:49.024 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:49.024 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:49.282 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:49.283 "name": "BaseBdev3", 00:13:49.283 "aliases": [ 00:13:49.283 "063a98be-d75f-4175-a351-dd7626af80fc" 00:13:49.283 ], 00:13:49.283 "product_name": "Malloc disk", 00:13:49.283 "block_size": 512, 00:13:49.283 "num_blocks": 65536, 00:13:49.283 "uuid": "063a98be-d75f-4175-a351-dd7626af80fc", 00:13:49.283 "assigned_rate_limits": { 00:13:49.283 "rw_ios_per_sec": 0, 00:13:49.283 "rw_mbytes_per_sec": 0, 00:13:49.283 "r_mbytes_per_sec": 0, 00:13:49.283 "w_mbytes_per_sec": 0 00:13:49.283 }, 00:13:49.283 "claimed": true, 00:13:49.283 "claim_type": "exclusive_write", 00:13:49.283 "zoned": false, 00:13:49.283 "supported_io_types": { 00:13:49.283 "read": true, 00:13:49.283 "write": true, 00:13:49.283 "unmap": true, 00:13:49.283 "flush": 
true, 00:13:49.283 "reset": true, 00:13:49.283 "nvme_admin": false, 00:13:49.283 "nvme_io": false, 00:13:49.283 "nvme_io_md": false, 00:13:49.283 "write_zeroes": true, 00:13:49.283 "zcopy": true, 00:13:49.283 "get_zone_info": false, 00:13:49.283 "zone_management": false, 00:13:49.283 "zone_append": false, 00:13:49.283 "compare": false, 00:13:49.283 "compare_and_write": false, 00:13:49.283 "abort": true, 00:13:49.283 "seek_hole": false, 00:13:49.283 "seek_data": false, 00:13:49.283 "copy": true, 00:13:49.283 "nvme_iov_md": false 00:13:49.283 }, 00:13:49.283 "memory_domains": [ 00:13:49.283 { 00:13:49.283 "dma_device_id": "system", 00:13:49.283 "dma_device_type": 1 00:13:49.283 }, 00:13:49.283 { 00:13:49.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.283 "dma_device_type": 2 00:13:49.283 } 00:13:49.283 ], 00:13:49.283 "driver_specific": {} 00:13:49.283 }' 00:13:49.283 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:49.283 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:49.283 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:49.283 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:49.542 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:49.542 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:49.542 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:49.542 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:49.542 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:49.542 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:49.542 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:49.542 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:49.542 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:49.542 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:49.542 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:49.801 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:49.801 "name": "BaseBdev4", 00:13:49.801 "aliases": [ 00:13:49.801 "a1553cb3-add0-4b64-a58e-f69ffa65501e" 00:13:49.801 ], 00:13:49.801 "product_name": "Malloc disk", 00:13:49.801 "block_size": 512, 00:13:49.801 "num_blocks": 65536, 00:13:49.801 "uuid": "a1553cb3-add0-4b64-a58e-f69ffa65501e", 00:13:49.801 "assigned_rate_limits": { 00:13:49.801 "rw_ios_per_sec": 0, 00:13:49.801 "rw_mbytes_per_sec": 0, 00:13:49.801 "r_mbytes_per_sec": 0, 00:13:49.801 "w_mbytes_per_sec": 0 00:13:49.801 }, 00:13:49.801 "claimed": true, 00:13:49.801 "claim_type": "exclusive_write", 00:13:49.801 "zoned": false, 00:13:49.801 "supported_io_types": { 00:13:49.801 "read": true, 00:13:49.801 "write": true, 00:13:49.801 "unmap": true, 00:13:49.801 "flush": true, 00:13:49.801 "reset": true, 00:13:49.801 "nvme_admin": false, 00:13:49.801 "nvme_io": false, 00:13:49.801 "nvme_io_md": false, 00:13:49.801 
"write_zeroes": true, 00:13:49.801 "zcopy": true, 00:13:49.801 "get_zone_info": false, 00:13:49.801 "zone_management": false, 00:13:49.801 "zone_append": false, 00:13:49.801 "compare": false, 00:13:49.801 "compare_and_write": false, 00:13:49.801 "abort": true, 00:13:49.801 "seek_hole": false, 00:13:49.801 "seek_data": false, 00:13:49.801 "copy": true, 00:13:49.801 "nvme_iov_md": false 00:13:49.801 }, 00:13:49.801 "memory_domains": [ 00:13:49.801 { 00:13:49.801 "dma_device_id": "system", 00:13:49.801 "dma_device_type": 1 00:13:49.801 }, 00:13:49.801 { 00:13:49.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.801 "dma_device_type": 2 00:13:49.801 } 00:13:49.801 ], 00:13:49.801 "driver_specific": {} 00:13:49.801 }' 00:13:49.801 06:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:49.801 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:50.061 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:50.061 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:50.061 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:50.061 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:50.061 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:50.061 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:50.061 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:50.061 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.320 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.320 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:50.320 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:50.320 [2024-08-14 06:45:17.536251] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:50.320 [2024-08-14 06:45:17.536300] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.320 [2024-08-14 06:45:17.536376] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid0 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.579 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:50.579 "name": "Existed_Raid", 00:13:50.579 "uuid": "0e164558-8d5a-477d-a55f-672e35dd0b24", 00:13:50.579 "strip_size_kb": 64, 00:13:50.579 "state": "offline", 00:13:50.579 "raid_level": "raid0", 00:13:50.579 "superblock": true, 00:13:50.579 "num_base_bdevs": 4, 00:13:50.579 "num_base_bdevs_discovered": 3, 00:13:50.579 "num_base_bdevs_operational": 3, 00:13:50.579 "base_bdevs_list": [ 00:13:50.579 { 00:13:50.579 "name": null, 00:13:50.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.579 "is_configured": false, 00:13:50.579 "data_offset": 2048, 00:13:50.579 "data_size": 63488 00:13:50.579 }, 00:13:50.579 { 00:13:50.579 "name": "BaseBdev2", 00:13:50.579 "uuid": "7b21d7d4-c11a-4366-a9fe-e04323193299", 00:13:50.579 "is_configured": true, 00:13:50.579 "data_offset": 2048, 00:13:50.579 "data_size": 63488 00:13:50.579 }, 00:13:50.579 { 00:13:50.579 "name": "BaseBdev3", 00:13:50.579 "uuid": "063a98be-d75f-4175-a351-dd7626af80fc", 00:13:50.579 "is_configured": true, 00:13:50.579 "data_offset": 2048, 00:13:50.580 "data_size": 63488 00:13:50.580 }, 00:13:50.580 { 00:13:50.580 "name": "BaseBdev4", 00:13:50.580 "uuid": "a1553cb3-add0-4b64-a58e-f69ffa65501e", 00:13:50.580 "is_configured": true, 00:13:50.580 "data_offset": 2048, 00:13:50.580 "data_size": 63488 00:13:50.580 } 00:13:50.580 ] 00:13:50.580 }' 00:13:50.580 06:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:50.580 06:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.167 06:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:51.167 06:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:51.167 06:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:51.167 06:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.426 06:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:51.426 06:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:51.426 06:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:51.685 [2024-08-14 06:45:18.687069] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:51.685 06:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:51.685 06:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:51.685 06:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.685 06:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:51.685 06:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:51.685 06:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:51.685 06:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:51.945 [2024-08-14 06:45:19.082648] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:51.945 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:51.945 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:51.945 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.945 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:52.204 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:52.204 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:52.205 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:52.464 [2024-08-14 06:45:19.502682] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:52.464 [2024-08-14 06:45:19.502766] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:52.464 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:52.464 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:52.464 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.464 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:52.724 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:52.724 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:52.724 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:13:52.724 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:52.724 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:52.724 06:45:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:52.724 BaseBdev2 00:13:52.724 06:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:52.724 06:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:52.725 06:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:52.725 06:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:52.725 06:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:52.725 06:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:52.725 06:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:52.985 06:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:53.245 [ 00:13:53.245 { 00:13:53.245 "name": "BaseBdev2", 00:13:53.245 "aliases": [ 00:13:53.245 "15c1ba09-0b3c-47c9-85aa-19eac0f55ad4" 00:13:53.245 ], 00:13:53.245 "product_name": "Malloc disk", 00:13:53.245 "block_size": 512, 00:13:53.245 "num_blocks": 65536, 00:13:53.245 "uuid": "15c1ba09-0b3c-47c9-85aa-19eac0f55ad4", 00:13:53.245 "assigned_rate_limits": { 00:13:53.245 "rw_ios_per_sec": 0, 00:13:53.245 "rw_mbytes_per_sec": 0, 00:13:53.245 "r_mbytes_per_sec": 0, 00:13:53.245 "w_mbytes_per_sec": 0 00:13:53.245 }, 00:13:53.245 "claimed": false, 00:13:53.245 "zoned": false, 00:13:53.245 "supported_io_types": { 00:13:53.245 "read": true, 00:13:53.245 "write": true, 00:13:53.245 "unmap": true, 00:13:53.245 "flush": true, 00:13:53.245 "reset": true, 00:13:53.245 "nvme_admin": false, 00:13:53.245 "nvme_io": false, 00:13:53.245 "nvme_io_md": false, 00:13:53.245 "write_zeroes": true, 00:13:53.245 "zcopy": true, 00:13:53.245 "get_zone_info": false, 00:13:53.245 "zone_management": false, 00:13:53.245 "zone_append": false, 00:13:53.245 "compare": false, 00:13:53.245 "compare_and_write": false, 00:13:53.245 "abort": true, 00:13:53.245 "seek_hole": false, 00:13:53.245 "seek_data": false, 00:13:53.245 "copy": true, 00:13:53.245 "nvme_iov_md": false 00:13:53.245 }, 00:13:53.245 "memory_domains": [ 00:13:53.245 { 00:13:53.245 "dma_device_id": "system", 00:13:53.245 "dma_device_type": 1 00:13:53.245 }, 00:13:53.245 { 00:13:53.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.246 "dma_device_type": 2 00:13:53.246 } 00:13:53.246 ], 00:13:53.246 "driver_specific": {} 00:13:53.246 } 00:13:53.246 ] 00:13:53.246 06:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:53.246 06:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:53.246 06:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:53.246 06:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:53.246 BaseBdev3 00:13:53.246 06:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev 
BaseBdev3 00:13:53.246 06:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:13:53.246 06:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:53.246 06:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:53.246 06:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:53.246 06:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:53.246 06:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:53.506 06:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:53.766 [ 00:13:53.766 { 00:13:53.766 "name": "BaseBdev3", 00:13:53.766 "aliases": [ 00:13:53.766 "2878295b-be35-485c-9da2-0b920bb23685" 00:13:53.766 ], 00:13:53.766 "product_name": "Malloc disk", 00:13:53.766 "block_size": 512, 00:13:53.766 "num_blocks": 65536, 00:13:53.766 "uuid": "2878295b-be35-485c-9da2-0b920bb23685", 00:13:53.766 "assigned_rate_limits": { 00:13:53.766 "rw_ios_per_sec": 0, 00:13:53.766 "rw_mbytes_per_sec": 0, 00:13:53.766 "r_mbytes_per_sec": 0, 00:13:53.766 "w_mbytes_per_sec": 0 00:13:53.766 }, 00:13:53.766 "claimed": false, 00:13:53.766 "zoned": false, 00:13:53.766 "supported_io_types": { 00:13:53.766 "read": true, 00:13:53.766 "write": true, 00:13:53.766 "unmap": true, 00:13:53.766 "flush": true, 00:13:53.766 "reset": true, 00:13:53.766 "nvme_admin": false, 00:13:53.766 "nvme_io": false, 00:13:53.766 "nvme_io_md": false, 00:13:53.766 "write_zeroes": true, 00:13:53.766 "zcopy": true, 00:13:53.766 "get_zone_info": false, 00:13:53.766 "zone_management": false, 00:13:53.766 "zone_append": false, 00:13:53.766 "compare": false, 00:13:53.766 "compare_and_write": false, 00:13:53.766 "abort": true, 00:13:53.766 "seek_hole": false, 00:13:53.766 "seek_data": false, 00:13:53.766 "copy": true, 00:13:53.766 "nvme_iov_md": false 00:13:53.766 }, 00:13:53.766 "memory_domains": [ 00:13:53.766 { 00:13:53.766 "dma_device_id": "system", 00:13:53.766 "dma_device_type": 1 00:13:53.766 }, 00:13:53.766 { 00:13:53.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.766 "dma_device_type": 2 00:13:53.766 } 00:13:53.766 ], 00:13:53.766 "driver_specific": {} 00:13:53.766 } 00:13:53.766 ] 00:13:53.766 06:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:53.766 06:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:53.766 06:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:53.766 06:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:53.766 BaseBdev4 00:13:53.766 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:13:53.766 06:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:13:53.766 06:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:53.766 06:45:21 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@897 -- # local i 00:13:53.766 06:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:53.766 06:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:53.766 06:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:54.026 06:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:54.287 [ 00:13:54.287 { 00:13:54.287 "name": "BaseBdev4", 00:13:54.287 "aliases": [ 00:13:54.287 "a053ef91-b9ab-4125-8194-3e0f55806b7d" 00:13:54.287 ], 00:13:54.287 "product_name": "Malloc disk", 00:13:54.287 "block_size": 512, 00:13:54.287 "num_blocks": 65536, 00:13:54.287 "uuid": "a053ef91-b9ab-4125-8194-3e0f55806b7d", 00:13:54.287 "assigned_rate_limits": { 00:13:54.287 "rw_ios_per_sec": 0, 00:13:54.287 "rw_mbytes_per_sec": 0, 00:13:54.287 "r_mbytes_per_sec": 0, 00:13:54.287 "w_mbytes_per_sec": 0 00:13:54.287 }, 00:13:54.287 "claimed": false, 00:13:54.287 "zoned": false, 00:13:54.287 "supported_io_types": { 00:13:54.287 "read": true, 00:13:54.287 "write": true, 00:13:54.287 "unmap": true, 00:13:54.287 "flush": true, 00:13:54.287 "reset": true, 00:13:54.287 "nvme_admin": false, 00:13:54.287 "nvme_io": false, 00:13:54.287 "nvme_io_md": false, 00:13:54.287 "write_zeroes": true, 00:13:54.287 "zcopy": true, 00:13:54.287 "get_zone_info": false, 00:13:54.287 "zone_management": false, 00:13:54.287 "zone_append": false, 00:13:54.287 "compare": false, 00:13:54.287 "compare_and_write": false, 00:13:54.287 "abort": true, 00:13:54.287 "seek_hole": false, 00:13:54.287 "seek_data": false, 00:13:54.287 "copy": true, 00:13:54.287 "nvme_iov_md": false 00:13:54.287 }, 00:13:54.287 "memory_domains": [ 00:13:54.287 { 00:13:54.287 "dma_device_id": "system", 00:13:54.287 "dma_device_type": 1 00:13:54.287 }, 00:13:54.287 { 00:13:54.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.287 "dma_device_type": 2 00:13:54.287 } 00:13:54.287 ], 00:13:54.287 "driver_specific": {} 00:13:54.287 } 00:13:54.287 ] 00:13:54.287 06:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:54.287 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:54.287 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:54.287 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:54.287 [2024-08-14 06:45:21.513382] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:54.287 [2024-08-14 06:45:21.513455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:54.287 [2024-08-14 06:45:21.513486] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.287 [2024-08-14 06:45:21.515643] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.287 [2024-08-14 06:45:21.515699] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:54.287 06:45:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:54.287 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:54.287 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:54.287 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:54.288 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:54.288 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:54.288 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:54.288 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:54.288 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:54.288 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:54.288 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.288 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.548 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:54.548 "name": "Existed_Raid", 00:13:54.548 "uuid": "a9bdefb2-9cf5-46a8-a280-b15e309c4e9c", 00:13:54.548 "strip_size_kb": 64, 00:13:54.548 "state": "configuring", 00:13:54.548 "raid_level": "raid0", 00:13:54.548 "superblock": true, 00:13:54.548 "num_base_bdevs": 4, 00:13:54.548 "num_base_bdevs_discovered": 3, 00:13:54.548 "num_base_bdevs_operational": 4, 00:13:54.548 "base_bdevs_list": [ 00:13:54.548 { 00:13:54.548 "name": "BaseBdev1", 00:13:54.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.548 "is_configured": false, 00:13:54.548 "data_offset": 0, 00:13:54.548 "data_size": 0 00:13:54.548 }, 00:13:54.548 { 00:13:54.548 "name": "BaseBdev2", 00:13:54.548 "uuid": "15c1ba09-0b3c-47c9-85aa-19eac0f55ad4", 00:13:54.548 "is_configured": true, 00:13:54.548 "data_offset": 2048, 00:13:54.548 "data_size": 63488 00:13:54.548 }, 00:13:54.548 { 00:13:54.548 "name": "BaseBdev3", 00:13:54.548 "uuid": "2878295b-be35-485c-9da2-0b920bb23685", 00:13:54.548 "is_configured": true, 00:13:54.548 "data_offset": 2048, 00:13:54.548 "data_size": 63488 00:13:54.548 }, 00:13:54.548 { 00:13:54.548 "name": "BaseBdev4", 00:13:54.548 "uuid": "a053ef91-b9ab-4125-8194-3e0f55806b7d", 00:13:54.548 "is_configured": true, 00:13:54.548 "data_offset": 2048, 00:13:54.548 "data_size": 63488 00:13:54.548 } 00:13:54.548 ] 00:13:54.548 }' 00:13:54.548 06:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:54.548 06:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.117 06:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:55.377 [2024-08-14 06:45:22.423806] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:55.377 06:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 4 00:13:55.377 06:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:55.377 06:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:55.378 06:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:55.378 06:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:55.378 06:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:55.378 06:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:55.378 06:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:55.378 06:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:55.378 06:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:55.378 06:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.378 06:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.637 06:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:55.637 "name": "Existed_Raid", 00:13:55.637 "uuid": "a9bdefb2-9cf5-46a8-a280-b15e309c4e9c", 00:13:55.637 "strip_size_kb": 64, 00:13:55.637 "state": "configuring", 00:13:55.637 "raid_level": "raid0", 00:13:55.637 "superblock": true, 00:13:55.637 "num_base_bdevs": 4, 00:13:55.637 "num_base_bdevs_discovered": 2, 00:13:55.637 "num_base_bdevs_operational": 4, 00:13:55.637 "base_bdevs_list": [ 00:13:55.637 { 00:13:55.637 "name": "BaseBdev1", 00:13:55.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.637 "is_configured": false, 00:13:55.637 "data_offset": 0, 00:13:55.637 "data_size": 0 00:13:55.637 }, 00:13:55.637 { 00:13:55.637 "name": null, 00:13:55.637 "uuid": "15c1ba09-0b3c-47c9-85aa-19eac0f55ad4", 00:13:55.637 "is_configured": false, 00:13:55.637 "data_offset": 2048, 00:13:55.637 "data_size": 63488 00:13:55.637 }, 00:13:55.637 { 00:13:55.637 "name": "BaseBdev3", 00:13:55.637 "uuid": "2878295b-be35-485c-9da2-0b920bb23685", 00:13:55.637 "is_configured": true, 00:13:55.638 "data_offset": 2048, 00:13:55.638 "data_size": 63488 00:13:55.638 }, 00:13:55.638 { 00:13:55.638 "name": "BaseBdev4", 00:13:55.638 "uuid": "a053ef91-b9ab-4125-8194-3e0f55806b7d", 00:13:55.638 "is_configured": true, 00:13:55.638 "data_offset": 2048, 00:13:55.638 "data_size": 63488 00:13:55.638 } 00:13:55.638 ] 00:13:55.638 }' 00:13:55.638 06:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:55.638 06:45:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.207 06:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.207 06:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:56.207 06:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:56.207 06:45:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:56.466 [2024-08-14 06:45:23.590881] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.466 BaseBdev1 00:13:56.466 06:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:56.467 06:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:56.467 06:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:56.467 06:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:56.467 06:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:56.467 06:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:56.467 06:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:56.730 06:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:56.730 [ 00:13:56.730 { 00:13:56.730 "name": "BaseBdev1", 00:13:56.730 "aliases": [ 00:13:56.730 "2a96712b-c35b-452c-8b5c-bf08c5eb60f5" 00:13:56.730 ], 00:13:56.730 "product_name": "Malloc disk", 00:13:56.730 "block_size": 512, 00:13:56.730 "num_blocks": 65536, 00:13:56.730 "uuid": "2a96712b-c35b-452c-8b5c-bf08c5eb60f5", 00:13:56.730 "assigned_rate_limits": { 00:13:56.730 "rw_ios_per_sec": 0, 00:13:56.730 "rw_mbytes_per_sec": 0, 00:13:56.730 "r_mbytes_per_sec": 0, 00:13:56.730 "w_mbytes_per_sec": 0 00:13:56.730 }, 00:13:56.730 "claimed": true, 00:13:56.730 "claim_type": "exclusive_write", 00:13:56.730 "zoned": false, 00:13:56.730 "supported_io_types": { 00:13:56.730 "read": true, 00:13:56.730 "write": true, 00:13:56.730 "unmap": true, 00:13:56.730 "flush": true, 00:13:56.730 "reset": true, 00:13:56.730 "nvme_admin": false, 00:13:56.730 "nvme_io": false, 00:13:56.730 "nvme_io_md": false, 00:13:56.730 "write_zeroes": true, 00:13:56.730 "zcopy": true, 00:13:56.730 "get_zone_info": false, 00:13:56.730 "zone_management": false, 00:13:56.730 "zone_append": false, 00:13:56.730 "compare": false, 00:13:56.730 "compare_and_write": false, 00:13:56.730 "abort": true, 00:13:56.730 "seek_hole": false, 00:13:56.730 "seek_data": false, 00:13:56.730 "copy": true, 00:13:56.730 "nvme_iov_md": false 00:13:56.730 }, 00:13:56.730 "memory_domains": [ 00:13:56.730 { 00:13:56.730 "dma_device_id": "system", 00:13:56.730 "dma_device_type": 1 00:13:56.730 }, 00:13:56.730 { 00:13:56.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.730 "dma_device_type": 2 00:13:56.730 } 00:13:56.730 ], 00:13:56.730 "driver_specific": {} 00:13:56.730 } 00:13:56.730 ] 00:13:56.730 06:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:56.730 06:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:56.730 06:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:56.730 06:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:56.730 06:45:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:56.730 06:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:56.730 06:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:56.730 06:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:56.730 06:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:56.730 06:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:56.730 06:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:56.730 06:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.730 06:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.994 06:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:56.994 "name": "Existed_Raid", 00:13:56.994 "uuid": "a9bdefb2-9cf5-46a8-a280-b15e309c4e9c", 00:13:56.994 "strip_size_kb": 64, 00:13:56.994 "state": "configuring", 00:13:56.994 "raid_level": "raid0", 00:13:56.994 "superblock": true, 00:13:56.994 "num_base_bdevs": 4, 00:13:56.994 "num_base_bdevs_discovered": 3, 00:13:56.994 "num_base_bdevs_operational": 4, 00:13:56.994 "base_bdevs_list": [ 00:13:56.994 { 00:13:56.994 "name": "BaseBdev1", 00:13:56.994 "uuid": "2a96712b-c35b-452c-8b5c-bf08c5eb60f5", 00:13:56.994 "is_configured": true, 00:13:56.994 "data_offset": 2048, 00:13:56.994 "data_size": 63488 00:13:56.994 }, 00:13:56.994 { 00:13:56.994 "name": null, 00:13:56.994 "uuid": "15c1ba09-0b3c-47c9-85aa-19eac0f55ad4", 00:13:56.994 "is_configured": false, 00:13:56.994 "data_offset": 2048, 00:13:56.994 "data_size": 63488 00:13:56.994 }, 00:13:56.994 { 00:13:56.994 "name": "BaseBdev3", 00:13:56.994 "uuid": "2878295b-be35-485c-9da2-0b920bb23685", 00:13:56.994 "is_configured": true, 00:13:56.994 "data_offset": 2048, 00:13:56.994 "data_size": 63488 00:13:56.994 }, 00:13:56.994 { 00:13:56.994 "name": "BaseBdev4", 00:13:56.994 "uuid": "a053ef91-b9ab-4125-8194-3e0f55806b7d", 00:13:56.994 "is_configured": true, 00:13:56.994 "data_offset": 2048, 00:13:56.994 "data_size": 63488 00:13:56.994 } 00:13:56.994 ] 00:13:56.994 }' 00:13:56.994 06:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:56.994 06:45:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.562 06:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.562 06:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:57.822 06:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:57.823 06:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:57.823 [2024-08-14 06:45:25.036591] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:57.823 06:45:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:57.823 06:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:57.823 06:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:57.823 06:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:57.823 06:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:57.823 06:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:57.823 06:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:57.823 06:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:57.823 06:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:57.823 06:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:57.823 06:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.823 06:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.083 06:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:58.083 "name": "Existed_Raid", 00:13:58.083 "uuid": "a9bdefb2-9cf5-46a8-a280-b15e309c4e9c", 00:13:58.083 "strip_size_kb": 64, 00:13:58.083 "state": "configuring", 00:13:58.083 "raid_level": "raid0", 00:13:58.083 "superblock": true, 00:13:58.083 "num_base_bdevs": 4, 00:13:58.083 "num_base_bdevs_discovered": 2, 00:13:58.083 "num_base_bdevs_operational": 4, 00:13:58.083 "base_bdevs_list": [ 00:13:58.083 { 00:13:58.083 "name": "BaseBdev1", 00:13:58.083 "uuid": "2a96712b-c35b-452c-8b5c-bf08c5eb60f5", 00:13:58.083 "is_configured": true, 00:13:58.083 "data_offset": 2048, 00:13:58.083 "data_size": 63488 00:13:58.083 }, 00:13:58.083 { 00:13:58.083 "name": null, 00:13:58.083 "uuid": "15c1ba09-0b3c-47c9-85aa-19eac0f55ad4", 00:13:58.083 "is_configured": false, 00:13:58.083 "data_offset": 2048, 00:13:58.083 "data_size": 63488 00:13:58.083 }, 00:13:58.083 { 00:13:58.083 "name": null, 00:13:58.083 "uuid": "2878295b-be35-485c-9da2-0b920bb23685", 00:13:58.083 "is_configured": false, 00:13:58.083 "data_offset": 2048, 00:13:58.083 "data_size": 63488 00:13:58.083 }, 00:13:58.083 { 00:13:58.083 "name": "BaseBdev4", 00:13:58.083 "uuid": "a053ef91-b9ab-4125-8194-3e0f55806b7d", 00:13:58.083 "is_configured": true, 00:13:58.083 "data_offset": 2048, 00:13:58.083 "data_size": 63488 00:13:58.083 } 00:13:58.083 ] 00:13:58.083 }' 00:13:58.083 06:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:58.083 06:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.653 06:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.654 06:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:58.913 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:58.914 06:45:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:59.173 [2024-08-14 06:45:26.206812] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:59.173 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:59.173 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:59.173 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:59.173 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:59.173 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:59.173 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:59.173 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:59.173 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:59.173 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:59.173 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:59.173 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.173 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.173 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:59.173 "name": "Existed_Raid", 00:13:59.173 "uuid": "a9bdefb2-9cf5-46a8-a280-b15e309c4e9c", 00:13:59.173 "strip_size_kb": 64, 00:13:59.173 "state": "configuring", 00:13:59.173 "raid_level": "raid0", 00:13:59.173 "superblock": true, 00:13:59.173 "num_base_bdevs": 4, 00:13:59.173 "num_base_bdevs_discovered": 3, 00:13:59.173 "num_base_bdevs_operational": 4, 00:13:59.173 "base_bdevs_list": [ 00:13:59.173 { 00:13:59.173 "name": "BaseBdev1", 00:13:59.173 "uuid": "2a96712b-c35b-452c-8b5c-bf08c5eb60f5", 00:13:59.173 "is_configured": true, 00:13:59.173 "data_offset": 2048, 00:13:59.173 "data_size": 63488 00:13:59.173 }, 00:13:59.173 { 00:13:59.173 "name": null, 00:13:59.173 "uuid": "15c1ba09-0b3c-47c9-85aa-19eac0f55ad4", 00:13:59.173 "is_configured": false, 00:13:59.173 "data_offset": 2048, 00:13:59.173 "data_size": 63488 00:13:59.173 }, 00:13:59.173 { 00:13:59.173 "name": "BaseBdev3", 00:13:59.174 "uuid": "2878295b-be35-485c-9da2-0b920bb23685", 00:13:59.174 "is_configured": true, 00:13:59.174 "data_offset": 2048, 00:13:59.174 "data_size": 63488 00:13:59.174 }, 00:13:59.174 { 00:13:59.174 "name": "BaseBdev4", 00:13:59.174 "uuid": "a053ef91-b9ab-4125-8194-3e0f55806b7d", 00:13:59.174 "is_configured": true, 00:13:59.174 "data_offset": 2048, 00:13:59.174 "data_size": 63488 00:13:59.174 } 00:13:59.174 ] 00:13:59.174 }' 00:13:59.174 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:59.174 06:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.744 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 
-- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:59.744 06:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.003 06:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:00.003 06:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:00.263 [2024-08-14 06:45:27.356859] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.263 06:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:00.263 06:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:00.263 06:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:00.263 06:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:00.263 06:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:00.263 06:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:00.263 06:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:00.263 06:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:00.263 06:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:00.263 06:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:00.263 06:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.263 06:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.523 06:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:00.523 "name": "Existed_Raid", 00:14:00.523 "uuid": "a9bdefb2-9cf5-46a8-a280-b15e309c4e9c", 00:14:00.523 "strip_size_kb": 64, 00:14:00.523 "state": "configuring", 00:14:00.523 "raid_level": "raid0", 00:14:00.523 "superblock": true, 00:14:00.523 "num_base_bdevs": 4, 00:14:00.523 "num_base_bdevs_discovered": 2, 00:14:00.523 "num_base_bdevs_operational": 4, 00:14:00.523 "base_bdevs_list": [ 00:14:00.523 { 00:14:00.523 "name": null, 00:14:00.523 "uuid": "2a96712b-c35b-452c-8b5c-bf08c5eb60f5", 00:14:00.523 "is_configured": false, 00:14:00.523 "data_offset": 2048, 00:14:00.523 "data_size": 63488 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "name": null, 00:14:00.523 "uuid": "15c1ba09-0b3c-47c9-85aa-19eac0f55ad4", 00:14:00.523 "is_configured": false, 00:14:00.523 "data_offset": 2048, 00:14:00.523 "data_size": 63488 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "name": "BaseBdev3", 00:14:00.523 "uuid": "2878295b-be35-485c-9da2-0b920bb23685", 00:14:00.523 "is_configured": true, 00:14:00.523 "data_offset": 2048, 00:14:00.523 "data_size": 63488 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "name": "BaseBdev4", 00:14:00.523 "uuid": "a053ef91-b9ab-4125-8194-3e0f55806b7d", 00:14:00.523 "is_configured": true, 00:14:00.523 "data_offset": 2048, 00:14:00.523 "data_size": 63488 00:14:00.523 
} 00:14:00.523 ] 00:14:00.523 }' 00:14:00.523 06:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:00.523 06:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.092 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.092 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:01.092 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:01.092 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:01.351 [2024-08-14 06:45:28.522962] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.352 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:01.352 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.352 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:01.352 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:01.352 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:01.352 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:01.352 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.352 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.352 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.352 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.352 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.352 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.612 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.612 "name": "Existed_Raid", 00:14:01.612 "uuid": "a9bdefb2-9cf5-46a8-a280-b15e309c4e9c", 00:14:01.612 "strip_size_kb": 64, 00:14:01.612 "state": "configuring", 00:14:01.612 "raid_level": "raid0", 00:14:01.612 "superblock": true, 00:14:01.612 "num_base_bdevs": 4, 00:14:01.612 "num_base_bdevs_discovered": 3, 00:14:01.612 "num_base_bdevs_operational": 4, 00:14:01.612 "base_bdevs_list": [ 00:14:01.612 { 00:14:01.612 "name": null, 00:14:01.612 "uuid": "2a96712b-c35b-452c-8b5c-bf08c5eb60f5", 00:14:01.612 "is_configured": false, 00:14:01.612 "data_offset": 2048, 00:14:01.612 "data_size": 63488 00:14:01.612 }, 00:14:01.612 { 00:14:01.612 "name": "BaseBdev2", 00:14:01.612 "uuid": "15c1ba09-0b3c-47c9-85aa-19eac0f55ad4", 00:14:01.612 "is_configured": true, 00:14:01.612 "data_offset": 2048, 00:14:01.612 "data_size": 63488 00:14:01.612 }, 00:14:01.612 { 00:14:01.612 "name": "BaseBdev3", 00:14:01.612 "uuid": 
"2878295b-be35-485c-9da2-0b920bb23685", 00:14:01.612 "is_configured": true, 00:14:01.612 "data_offset": 2048, 00:14:01.612 "data_size": 63488 00:14:01.612 }, 00:14:01.612 { 00:14:01.612 "name": "BaseBdev4", 00:14:01.612 "uuid": "a053ef91-b9ab-4125-8194-3e0f55806b7d", 00:14:01.612 "is_configured": true, 00:14:01.612 "data_offset": 2048, 00:14:01.612 "data_size": 63488 00:14:01.612 } 00:14:01.612 ] 00:14:01.612 }' 00:14:01.612 06:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.612 06:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.181 06:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:02.181 06:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.441 06:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:02.441 06:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.441 06:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:02.441 06:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2a96712b-c35b-452c-8b5c-bf08c5eb60f5 00:14:02.700 [2024-08-14 06:45:29.834129] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:02.700 [2024-08-14 06:45:29.834395] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:02.700 [2024-08-14 06:45:29.834417] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:02.701 [2024-08-14 06:45:29.834734] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:14:02.701 [2024-08-14 06:45:29.834883] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:02.701 [2024-08-14 06:45:29.834897] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:14:02.701 [2024-08-14 06:45:29.835019] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.701 NewBaseBdev 00:14:02.701 06:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:02.701 06:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:14:02.701 06:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:02.701 06:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:02.701 06:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:02.701 06:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:02.701 06:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:02.960 06:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:02.960 [ 00:14:02.960 { 00:14:02.960 "name": "NewBaseBdev", 00:14:02.960 "aliases": [ 00:14:02.960 "2a96712b-c35b-452c-8b5c-bf08c5eb60f5" 00:14:02.960 ], 00:14:02.960 "product_name": "Malloc disk", 00:14:02.960 "block_size": 512, 00:14:02.960 "num_blocks": 65536, 00:14:02.960 "uuid": "2a96712b-c35b-452c-8b5c-bf08c5eb60f5", 00:14:02.960 "assigned_rate_limits": { 00:14:02.960 "rw_ios_per_sec": 0, 00:14:02.960 "rw_mbytes_per_sec": 0, 00:14:02.960 "r_mbytes_per_sec": 0, 00:14:02.960 "w_mbytes_per_sec": 0 00:14:02.960 }, 00:14:02.960 "claimed": true, 00:14:02.960 "claim_type": "exclusive_write", 00:14:02.960 "zoned": false, 00:14:02.960 "supported_io_types": { 00:14:02.960 "read": true, 00:14:02.960 "write": true, 00:14:02.960 "unmap": true, 00:14:02.960 "flush": true, 00:14:02.960 "reset": true, 00:14:02.960 "nvme_admin": false, 00:14:02.960 "nvme_io": false, 00:14:02.960 "nvme_io_md": false, 00:14:02.960 "write_zeroes": true, 00:14:02.960 "zcopy": true, 00:14:02.960 "get_zone_info": false, 00:14:02.960 "zone_management": false, 00:14:02.960 "zone_append": false, 00:14:02.960 "compare": false, 00:14:02.960 "compare_and_write": false, 00:14:02.960 "abort": true, 00:14:02.960 "seek_hole": false, 00:14:02.960 "seek_data": false, 00:14:02.960 "copy": true, 00:14:02.960 "nvme_iov_md": false 00:14:02.960 }, 00:14:02.960 "memory_domains": [ 00:14:02.960 { 00:14:02.960 "dma_device_id": "system", 00:14:02.960 "dma_device_type": 1 00:14:02.960 }, 00:14:02.960 { 00:14:02.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.960 "dma_device_type": 2 00:14:02.961 } 00:14:02.961 ], 00:14:02.961 "driver_specific": {} 00:14:02.961 } 00:14:02.961 ] 00:14:02.961 06:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:02.961 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:02.961 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:02.961 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:02.961 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:02.961 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:02.961 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:02.961 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:02.961 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:02.961 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:02.961 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:03.221 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.221 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.221 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:03.221 "name": "Existed_Raid", 00:14:03.221 "uuid": 
"a9bdefb2-9cf5-46a8-a280-b15e309c4e9c", 00:14:03.221 "strip_size_kb": 64, 00:14:03.221 "state": "online", 00:14:03.221 "raid_level": "raid0", 00:14:03.221 "superblock": true, 00:14:03.221 "num_base_bdevs": 4, 00:14:03.221 "num_base_bdevs_discovered": 4, 00:14:03.221 "num_base_bdevs_operational": 4, 00:14:03.221 "base_bdevs_list": [ 00:14:03.221 { 00:14:03.221 "name": "NewBaseBdev", 00:14:03.221 "uuid": "2a96712b-c35b-452c-8b5c-bf08c5eb60f5", 00:14:03.221 "is_configured": true, 00:14:03.221 "data_offset": 2048, 00:14:03.221 "data_size": 63488 00:14:03.221 }, 00:14:03.221 { 00:14:03.221 "name": "BaseBdev2", 00:14:03.221 "uuid": "15c1ba09-0b3c-47c9-85aa-19eac0f55ad4", 00:14:03.221 "is_configured": true, 00:14:03.221 "data_offset": 2048, 00:14:03.221 "data_size": 63488 00:14:03.221 }, 00:14:03.221 { 00:14:03.221 "name": "BaseBdev3", 00:14:03.221 "uuid": "2878295b-be35-485c-9da2-0b920bb23685", 00:14:03.221 "is_configured": true, 00:14:03.221 "data_offset": 2048, 00:14:03.221 "data_size": 63488 00:14:03.221 }, 00:14:03.221 { 00:14:03.221 "name": "BaseBdev4", 00:14:03.221 "uuid": "a053ef91-b9ab-4125-8194-3e0f55806b7d", 00:14:03.221 "is_configured": true, 00:14:03.221 "data_offset": 2048, 00:14:03.221 "data_size": 63488 00:14:03.221 } 00:14:03.221 ] 00:14:03.221 }' 00:14:03.221 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:03.221 06:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.791 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:03.791 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:03.791 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:03.791 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:03.791 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:03.791 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:03.791 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:03.791 06:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:04.051 [2024-08-14 06:45:31.160483] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.051 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:04.051 "name": "Existed_Raid", 00:14:04.051 "aliases": [ 00:14:04.051 "a9bdefb2-9cf5-46a8-a280-b15e309c4e9c" 00:14:04.051 ], 00:14:04.051 "product_name": "Raid Volume", 00:14:04.051 "block_size": 512, 00:14:04.051 "num_blocks": 253952, 00:14:04.051 "uuid": "a9bdefb2-9cf5-46a8-a280-b15e309c4e9c", 00:14:04.051 "assigned_rate_limits": { 00:14:04.051 "rw_ios_per_sec": 0, 00:14:04.051 "rw_mbytes_per_sec": 0, 00:14:04.051 "r_mbytes_per_sec": 0, 00:14:04.051 "w_mbytes_per_sec": 0 00:14:04.051 }, 00:14:04.051 "claimed": false, 00:14:04.051 "zoned": false, 00:14:04.051 "supported_io_types": { 00:14:04.051 "read": true, 00:14:04.051 "write": true, 00:14:04.051 "unmap": true, 00:14:04.051 "flush": true, 00:14:04.051 "reset": true, 00:14:04.051 "nvme_admin": false, 00:14:04.051 "nvme_io": false, 00:14:04.051 "nvme_io_md": false, 00:14:04.051 
"write_zeroes": true, 00:14:04.051 "zcopy": false, 00:14:04.051 "get_zone_info": false, 00:14:04.051 "zone_management": false, 00:14:04.051 "zone_append": false, 00:14:04.051 "compare": false, 00:14:04.051 "compare_and_write": false, 00:14:04.051 "abort": false, 00:14:04.051 "seek_hole": false, 00:14:04.051 "seek_data": false, 00:14:04.051 "copy": false, 00:14:04.051 "nvme_iov_md": false 00:14:04.051 }, 00:14:04.051 "memory_domains": [ 00:14:04.051 { 00:14:04.051 "dma_device_id": "system", 00:14:04.051 "dma_device_type": 1 00:14:04.051 }, 00:14:04.051 { 00:14:04.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.051 "dma_device_type": 2 00:14:04.051 }, 00:14:04.051 { 00:14:04.051 "dma_device_id": "system", 00:14:04.051 "dma_device_type": 1 00:14:04.051 }, 00:14:04.051 { 00:14:04.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.051 "dma_device_type": 2 00:14:04.051 }, 00:14:04.051 { 00:14:04.051 "dma_device_id": "system", 00:14:04.051 "dma_device_type": 1 00:14:04.051 }, 00:14:04.051 { 00:14:04.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.051 "dma_device_type": 2 00:14:04.051 }, 00:14:04.051 { 00:14:04.051 "dma_device_id": "system", 00:14:04.051 "dma_device_type": 1 00:14:04.051 }, 00:14:04.051 { 00:14:04.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.051 "dma_device_type": 2 00:14:04.051 } 00:14:04.051 ], 00:14:04.051 "driver_specific": { 00:14:04.051 "raid": { 00:14:04.051 "uuid": "a9bdefb2-9cf5-46a8-a280-b15e309c4e9c", 00:14:04.051 "strip_size_kb": 64, 00:14:04.051 "state": "online", 00:14:04.051 "raid_level": "raid0", 00:14:04.051 "superblock": true, 00:14:04.051 "num_base_bdevs": 4, 00:14:04.051 "num_base_bdevs_discovered": 4, 00:14:04.051 "num_base_bdevs_operational": 4, 00:14:04.051 "base_bdevs_list": [ 00:14:04.051 { 00:14:04.051 "name": "NewBaseBdev", 00:14:04.051 "uuid": "2a96712b-c35b-452c-8b5c-bf08c5eb60f5", 00:14:04.051 "is_configured": true, 00:14:04.051 "data_offset": 2048, 00:14:04.051 "data_size": 63488 00:14:04.051 }, 00:14:04.051 { 00:14:04.051 "name": "BaseBdev2", 00:14:04.051 "uuid": "15c1ba09-0b3c-47c9-85aa-19eac0f55ad4", 00:14:04.051 "is_configured": true, 00:14:04.051 "data_offset": 2048, 00:14:04.051 "data_size": 63488 00:14:04.051 }, 00:14:04.051 { 00:14:04.051 "name": "BaseBdev3", 00:14:04.051 "uuid": "2878295b-be35-485c-9da2-0b920bb23685", 00:14:04.051 "is_configured": true, 00:14:04.051 "data_offset": 2048, 00:14:04.051 "data_size": 63488 00:14:04.051 }, 00:14:04.051 { 00:14:04.051 "name": "BaseBdev4", 00:14:04.051 "uuid": "a053ef91-b9ab-4125-8194-3e0f55806b7d", 00:14:04.051 "is_configured": true, 00:14:04.051 "data_offset": 2048, 00:14:04.051 "data_size": 63488 00:14:04.051 } 00:14:04.051 ] 00:14:04.051 } 00:14:04.051 } 00:14:04.051 }' 00:14:04.051 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:04.051 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:04.051 BaseBdev2 00:14:04.051 BaseBdev3 00:14:04.051 BaseBdev4' 00:14:04.051 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:04.051 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:04.051 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:04.312 06:45:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:04.312 "name": "NewBaseBdev", 00:14:04.312 "aliases": [ 00:14:04.312 "2a96712b-c35b-452c-8b5c-bf08c5eb60f5" 00:14:04.312 ], 00:14:04.312 "product_name": "Malloc disk", 00:14:04.312 "block_size": 512, 00:14:04.312 "num_blocks": 65536, 00:14:04.312 "uuid": "2a96712b-c35b-452c-8b5c-bf08c5eb60f5", 00:14:04.312 "assigned_rate_limits": { 00:14:04.312 "rw_ios_per_sec": 0, 00:14:04.312 "rw_mbytes_per_sec": 0, 00:14:04.312 "r_mbytes_per_sec": 0, 00:14:04.312 "w_mbytes_per_sec": 0 00:14:04.312 }, 00:14:04.312 "claimed": true, 00:14:04.312 "claim_type": "exclusive_write", 00:14:04.312 "zoned": false, 00:14:04.312 "supported_io_types": { 00:14:04.312 "read": true, 00:14:04.312 "write": true, 00:14:04.312 "unmap": true, 00:14:04.312 "flush": true, 00:14:04.312 "reset": true, 00:14:04.312 "nvme_admin": false, 00:14:04.312 "nvme_io": false, 00:14:04.312 "nvme_io_md": false, 00:14:04.312 "write_zeroes": true, 00:14:04.312 "zcopy": true, 00:14:04.312 "get_zone_info": false, 00:14:04.312 "zone_management": false, 00:14:04.312 "zone_append": false, 00:14:04.312 "compare": false, 00:14:04.312 "compare_and_write": false, 00:14:04.312 "abort": true, 00:14:04.312 "seek_hole": false, 00:14:04.312 "seek_data": false, 00:14:04.312 "copy": true, 00:14:04.312 "nvme_iov_md": false 00:14:04.312 }, 00:14:04.312 "memory_domains": [ 00:14:04.312 { 00:14:04.312 "dma_device_id": "system", 00:14:04.312 "dma_device_type": 1 00:14:04.312 }, 00:14:04.312 { 00:14:04.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.312 "dma_device_type": 2 00:14:04.312 } 00:14:04.312 ], 00:14:04.312 "driver_specific": {} 00:14:04.312 }' 00:14:04.312 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:04.312 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:04.312 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:04.312 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:04.312 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:04.573 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:04.573 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:04.573 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:04.573 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:04.573 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:04.573 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:04.573 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:04.573 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:04.573 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:04.573 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:04.833 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:04.833 "name": "BaseBdev2", 00:14:04.833 "aliases": [ 
00:14:04.833 "15c1ba09-0b3c-47c9-85aa-19eac0f55ad4" 00:14:04.833 ], 00:14:04.833 "product_name": "Malloc disk", 00:14:04.833 "block_size": 512, 00:14:04.833 "num_blocks": 65536, 00:14:04.833 "uuid": "15c1ba09-0b3c-47c9-85aa-19eac0f55ad4", 00:14:04.833 "assigned_rate_limits": { 00:14:04.833 "rw_ios_per_sec": 0, 00:14:04.833 "rw_mbytes_per_sec": 0, 00:14:04.833 "r_mbytes_per_sec": 0, 00:14:04.833 "w_mbytes_per_sec": 0 00:14:04.833 }, 00:14:04.833 "claimed": true, 00:14:04.833 "claim_type": "exclusive_write", 00:14:04.833 "zoned": false, 00:14:04.833 "supported_io_types": { 00:14:04.833 "read": true, 00:14:04.833 "write": true, 00:14:04.833 "unmap": true, 00:14:04.833 "flush": true, 00:14:04.833 "reset": true, 00:14:04.833 "nvme_admin": false, 00:14:04.833 "nvme_io": false, 00:14:04.833 "nvme_io_md": false, 00:14:04.833 "write_zeroes": true, 00:14:04.833 "zcopy": true, 00:14:04.833 "get_zone_info": false, 00:14:04.833 "zone_management": false, 00:14:04.833 "zone_append": false, 00:14:04.833 "compare": false, 00:14:04.833 "compare_and_write": false, 00:14:04.833 "abort": true, 00:14:04.833 "seek_hole": false, 00:14:04.833 "seek_data": false, 00:14:04.833 "copy": true, 00:14:04.833 "nvme_iov_md": false 00:14:04.833 }, 00:14:04.833 "memory_domains": [ 00:14:04.833 { 00:14:04.833 "dma_device_id": "system", 00:14:04.833 "dma_device_type": 1 00:14:04.833 }, 00:14:04.833 { 00:14:04.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.833 "dma_device_type": 2 00:14:04.833 } 00:14:04.833 ], 00:14:04.833 "driver_specific": {} 00:14:04.833 }' 00:14:04.833 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:04.833 06:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:04.833 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:04.833 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:04.833 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:05.094 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:05.094 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:05.094 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:05.094 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:05.094 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:05.094 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:05.094 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:05.094 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:05.094 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:05.094 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:05.353 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:05.353 "name": "BaseBdev3", 00:14:05.353 "aliases": [ 00:14:05.353 "2878295b-be35-485c-9da2-0b920bb23685" 00:14:05.353 ], 00:14:05.353 "product_name": "Malloc disk", 00:14:05.353 "block_size": 512, 
00:14:05.353 "num_blocks": 65536, 00:14:05.353 "uuid": "2878295b-be35-485c-9da2-0b920bb23685", 00:14:05.353 "assigned_rate_limits": { 00:14:05.353 "rw_ios_per_sec": 0, 00:14:05.353 "rw_mbytes_per_sec": 0, 00:14:05.353 "r_mbytes_per_sec": 0, 00:14:05.353 "w_mbytes_per_sec": 0 00:14:05.353 }, 00:14:05.353 "claimed": true, 00:14:05.353 "claim_type": "exclusive_write", 00:14:05.353 "zoned": false, 00:14:05.353 "supported_io_types": { 00:14:05.353 "read": true, 00:14:05.353 "write": true, 00:14:05.353 "unmap": true, 00:14:05.353 "flush": true, 00:14:05.353 "reset": true, 00:14:05.353 "nvme_admin": false, 00:14:05.353 "nvme_io": false, 00:14:05.353 "nvme_io_md": false, 00:14:05.353 "write_zeroes": true, 00:14:05.353 "zcopy": true, 00:14:05.353 "get_zone_info": false, 00:14:05.353 "zone_management": false, 00:14:05.353 "zone_append": false, 00:14:05.353 "compare": false, 00:14:05.353 "compare_and_write": false, 00:14:05.353 "abort": true, 00:14:05.353 "seek_hole": false, 00:14:05.353 "seek_data": false, 00:14:05.353 "copy": true, 00:14:05.353 "nvme_iov_md": false 00:14:05.353 }, 00:14:05.353 "memory_domains": [ 00:14:05.353 { 00:14:05.353 "dma_device_id": "system", 00:14:05.353 "dma_device_type": 1 00:14:05.353 }, 00:14:05.353 { 00:14:05.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.353 "dma_device_type": 2 00:14:05.353 } 00:14:05.353 ], 00:14:05.353 "driver_specific": {} 00:14:05.353 }' 00:14:05.353 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:05.353 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:05.353 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:05.353 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:05.353 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:05.613 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:05.613 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:05.613 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:05.613 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:05.613 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:05.613 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:05.613 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:05.613 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:05.613 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:05.613 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:05.872 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:05.872 "name": "BaseBdev4", 00:14:05.872 "aliases": [ 00:14:05.872 "a053ef91-b9ab-4125-8194-3e0f55806b7d" 00:14:05.872 ], 00:14:05.872 "product_name": "Malloc disk", 00:14:05.872 "block_size": 512, 00:14:05.872 "num_blocks": 65536, 00:14:05.872 "uuid": "a053ef91-b9ab-4125-8194-3e0f55806b7d", 00:14:05.872 "assigned_rate_limits": { 00:14:05.872 
"rw_ios_per_sec": 0, 00:14:05.872 "rw_mbytes_per_sec": 0, 00:14:05.872 "r_mbytes_per_sec": 0, 00:14:05.872 "w_mbytes_per_sec": 0 00:14:05.872 }, 00:14:05.872 "claimed": true, 00:14:05.872 "claim_type": "exclusive_write", 00:14:05.872 "zoned": false, 00:14:05.872 "supported_io_types": { 00:14:05.872 "read": true, 00:14:05.872 "write": true, 00:14:05.872 "unmap": true, 00:14:05.872 "flush": true, 00:14:05.872 "reset": true, 00:14:05.872 "nvme_admin": false, 00:14:05.872 "nvme_io": false, 00:14:05.872 "nvme_io_md": false, 00:14:05.872 "write_zeroes": true, 00:14:05.872 "zcopy": true, 00:14:05.872 "get_zone_info": false, 00:14:05.872 "zone_management": false, 00:14:05.872 "zone_append": false, 00:14:05.872 "compare": false, 00:14:05.872 "compare_and_write": false, 00:14:05.872 "abort": true, 00:14:05.872 "seek_hole": false, 00:14:05.872 "seek_data": false, 00:14:05.872 "copy": true, 00:14:05.872 "nvme_iov_md": false 00:14:05.873 }, 00:14:05.873 "memory_domains": [ 00:14:05.873 { 00:14:05.873 "dma_device_id": "system", 00:14:05.873 "dma_device_type": 1 00:14:05.873 }, 00:14:05.873 { 00:14:05.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.873 "dma_device_type": 2 00:14:05.873 } 00:14:05.873 ], 00:14:05.873 "driver_specific": {} 00:14:05.873 }' 00:14:05.873 06:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:05.873 06:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:05.873 06:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:05.873 06:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:06.133 06:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:06.133 06:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:06.133 06:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:06.133 06:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:06.133 06:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:06.133 06:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:06.133 06:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:06.133 06:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:06.133 06:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:06.393 [2024-08-14 06:45:33.504335] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:06.393 [2024-08-14 06:45:33.504388] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.393 [2024-08-14 06:45:33.504509] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.393 [2024-08-14 06:45:33.504587] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.393 [2024-08-14 06:45:33.504602] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:14:06.393 06:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 84553 00:14:06.393 06:45:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 84553 ']' 00:14:06.393 06:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 84553 00:14:06.393 06:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:14:06.393 06:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:06.393 06:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84553 00:14:06.393 killing process with pid 84553 00:14:06.393 06:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:06.393 06:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:06.393 06:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84553' 00:14:06.393 06:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 84553 00:14:06.393 [2024-08-14 06:45:33.558458] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:06.393 06:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 84553 00:14:06.393 [2024-08-14 06:45:33.636698] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:06.961 06:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:06.961 00:14:06.961 real 0m27.767s 00:14:06.961 user 0m51.541s 00:14:06.961 sys 0m4.123s 00:14:06.961 06:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:06.961 06:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.961 ************************************ 00:14:06.961 END TEST raid_state_function_test_sb 00:14:06.961 ************************************ 00:14:06.961 06:45:34 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:14:06.961 06:45:34 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:06.961 06:45:34 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:06.961 06:45:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:06.961 ************************************ 00:14:06.961 START TEST raid_superblock_test 00:14:06.961 ************************************ 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 4 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local 
raid_bdev_name=raid_bdev1 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=85557 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 85557 /var/tmp/spdk-raid.sock 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 85557 ']' 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:06.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:06.961 06:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.961 [2024-08-14 06:45:34.161863] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
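At this point raid_superblock_test launches its own bdev_svc app, waits for it to answer on /var/tmp/spdk-raid.sock, and then drives everything through rpc.py: four malloc bdevs wrapped in passthru bdevs (pt1-pt4) become the base devices of a raid0 volume with a 64 KiB strip size and an on-disk superblock. The bash sketch below condenses that setup from the commands visible in this trace; it is illustrative only — the readiness poll via rpc_get_methods, the backgrounding of bdev_svc, and the explicit kill are stand-ins for the harness helpers waitforlisten and killprocess that the log itself uses.

    #!/usr/bin/env bash
    # Minimal sketch of the raid_superblock_test setup, assuming the same
    # repo layout and RPC socket as in the trace above.
    rpc_sock=/var/tmp/spdk-raid.sock
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $rpc_sock"

    # Start a standalone bdev service with raid debug logging enabled.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_sock" -L bdev_raid &
    svc_pid=$!

    # Poll until the RPC socket is up (the harness uses waitforlisten instead).
    until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

    # Four 32 MiB / 512 B-block malloc bdevs, each claimed by a passthru bdev
    # with a fixed UUID, exactly as traced for pt1..pt4.
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b "malloc$i"
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done

    # Assemble raid0 with 64 KiB strips and a superblock (-s).
    $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

    # The new array should report state "online" with all four slots configured.
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'

    # Cleanup (the harness does this with bdev_raid_delete plus killprocess).
    $rpc bdev_raid_delete raid_bdev1
    kill "$svc_pid"

Because the array is created with -s, a superblock region is reserved at the front of each base bdev, which is why the per-slot entries in the JSON dumps above show data_offset 2048 and data_size 63488 for 65536-block base bdevs.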
00:14:06.961 [2024-08-14 06:45:34.161970] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85557 ] 00:14:07.222 [2024-08-14 06:45:34.293864] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.222 [2024-08-14 06:45:34.375518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.222 [2024-08-14 06:45:34.454478] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.222 [2024-08-14 06:45:34.454525] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.160 06:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:08.160 06:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:14:08.160 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:14:08.160 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:08.160 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:14:08.160 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:14:08.160 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:08.160 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:08.160 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:08.160 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:08.160 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:08.160 malloc1 00:14:08.160 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:08.420 [2024-08-14 06:45:35.452280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:08.420 [2024-08-14 06:45:35.452388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.420 [2024-08-14 06:45:35.452428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:08.420 [2024-08-14 06:45:35.452439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.420 [2024-08-14 06:45:35.455155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.420 [2024-08-14 06:45:35.455200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:08.420 pt1 00:14:08.420 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:08.420 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:08.420 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:14:08.420 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:14:08.420 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:08.420 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:08.420 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:08.420 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:08.420 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:08.420 malloc2 00:14:08.420 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:08.681 [2024-08-14 06:45:35.834860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:08.681 [2024-08-14 06:45:35.834959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.681 [2024-08-14 06:45:35.834988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:08.681 [2024-08-14 06:45:35.834999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.681 [2024-08-14 06:45:35.837608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.681 [2024-08-14 06:45:35.837640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:08.681 pt2 00:14:08.681 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:08.681 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:08.681 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:14:08.681 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:14:08.681 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:08.681 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:08.681 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:08.681 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:08.681 06:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:08.940 malloc3 00:14:08.940 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:09.200 [2024-08-14 06:45:36.257324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:09.200 [2024-08-14 06:45:36.257429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.200 [2024-08-14 06:45:36.257462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:09.200 [2024-08-14 06:45:36.257474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.200 [2024-08-14 06:45:36.260104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.200 [2024-08-14 
06:45:36.260139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:09.200 pt3 00:14:09.200 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:09.200 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:09.200 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:14:09.200 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:14:09.200 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:09.200 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:09.200 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:09.200 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:09.200 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:14:09.460 malloc4 00:14:09.460 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:09.460 [2024-08-14 06:45:36.671750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:09.460 [2024-08-14 06:45:36.671846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.460 [2024-08-14 06:45:36.671872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:09.460 [2024-08-14 06:45:36.671881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.460 [2024-08-14 06:45:36.674479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.460 [2024-08-14 06:45:36.674519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:09.460 pt4 00:14:09.460 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:09.460 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:09.460 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:14:09.720 [2024-08-14 06:45:36.855492] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:09.720 [2024-08-14 06:45:36.857745] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:09.720 [2024-08-14 06:45:36.857830] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:09.720 [2024-08-14 06:45:36.857875] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:09.720 [2024-08-14 06:45:36.858071] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:09.720 [2024-08-14 06:45:36.858096] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:09.720 [2024-08-14 06:45:36.858466] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:09.720 [2024-08-14 06:45:36.858650] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:09.720 [2024-08-14 06:45:36.858670] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:09.720 [2024-08-14 06:45:36.858858] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.720 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:09.720 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:09.720 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:09.720 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:09.720 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:09.720 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:09.720 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:09.720 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:09.720 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:09.720 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:09.720 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.720 06:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.980 06:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.980 "name": "raid_bdev1", 00:14:09.980 "uuid": "d4f68f34-9ed4-4aec-8d6d-15bbd4698efb", 00:14:09.980 "strip_size_kb": 64, 00:14:09.980 "state": "online", 00:14:09.980 "raid_level": "raid0", 00:14:09.980 "superblock": true, 00:14:09.980 "num_base_bdevs": 4, 00:14:09.980 "num_base_bdevs_discovered": 4, 00:14:09.980 "num_base_bdevs_operational": 4, 00:14:09.980 "base_bdevs_list": [ 00:14:09.980 { 00:14:09.980 "name": "pt1", 00:14:09.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:09.980 "is_configured": true, 00:14:09.980 "data_offset": 2048, 00:14:09.980 "data_size": 63488 00:14:09.980 }, 00:14:09.980 { 00:14:09.980 "name": "pt2", 00:14:09.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:09.980 "is_configured": true, 00:14:09.980 "data_offset": 2048, 00:14:09.980 "data_size": 63488 00:14:09.980 }, 00:14:09.980 { 00:14:09.980 "name": "pt3", 00:14:09.980 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:09.980 "is_configured": true, 00:14:09.980 "data_offset": 2048, 00:14:09.980 "data_size": 63488 00:14:09.980 }, 00:14:09.980 { 00:14:09.980 "name": "pt4", 00:14:09.980 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:09.980 "is_configured": true, 00:14:09.980 "data_offset": 2048, 00:14:09.980 "data_size": 63488 00:14:09.980 } 00:14:09.980 ] 00:14:09.980 }' 00:14:09.980 06:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.980 06:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.551 06:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:14:10.551 06:45:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:10.551 06:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:10.551 06:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:10.551 06:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:10.551 06:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:10.551 06:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:10.551 06:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:10.551 [2024-08-14 06:45:37.798200] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.811 06:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:10.811 "name": "raid_bdev1", 00:14:10.811 "aliases": [ 00:14:10.811 "d4f68f34-9ed4-4aec-8d6d-15bbd4698efb" 00:14:10.811 ], 00:14:10.811 "product_name": "Raid Volume", 00:14:10.811 "block_size": 512, 00:14:10.811 "num_blocks": 253952, 00:14:10.811 "uuid": "d4f68f34-9ed4-4aec-8d6d-15bbd4698efb", 00:14:10.811 "assigned_rate_limits": { 00:14:10.811 "rw_ios_per_sec": 0, 00:14:10.811 "rw_mbytes_per_sec": 0, 00:14:10.811 "r_mbytes_per_sec": 0, 00:14:10.811 "w_mbytes_per_sec": 0 00:14:10.811 }, 00:14:10.811 "claimed": false, 00:14:10.811 "zoned": false, 00:14:10.811 "supported_io_types": { 00:14:10.811 "read": true, 00:14:10.811 "write": true, 00:14:10.811 "unmap": true, 00:14:10.811 "flush": true, 00:14:10.811 "reset": true, 00:14:10.811 "nvme_admin": false, 00:14:10.811 "nvme_io": false, 00:14:10.811 "nvme_io_md": false, 00:14:10.811 "write_zeroes": true, 00:14:10.811 "zcopy": false, 00:14:10.811 "get_zone_info": false, 00:14:10.811 "zone_management": false, 00:14:10.811 "zone_append": false, 00:14:10.811 "compare": false, 00:14:10.811 "compare_and_write": false, 00:14:10.811 "abort": false, 00:14:10.811 "seek_hole": false, 00:14:10.811 "seek_data": false, 00:14:10.811 "copy": false, 00:14:10.811 "nvme_iov_md": false 00:14:10.811 }, 00:14:10.811 "memory_domains": [ 00:14:10.811 { 00:14:10.811 "dma_device_id": "system", 00:14:10.811 "dma_device_type": 1 00:14:10.811 }, 00:14:10.811 { 00:14:10.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.811 "dma_device_type": 2 00:14:10.811 }, 00:14:10.811 { 00:14:10.811 "dma_device_id": "system", 00:14:10.811 "dma_device_type": 1 00:14:10.811 }, 00:14:10.811 { 00:14:10.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.811 "dma_device_type": 2 00:14:10.811 }, 00:14:10.811 { 00:14:10.811 "dma_device_id": "system", 00:14:10.811 "dma_device_type": 1 00:14:10.811 }, 00:14:10.811 { 00:14:10.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.811 "dma_device_type": 2 00:14:10.811 }, 00:14:10.811 { 00:14:10.811 "dma_device_id": "system", 00:14:10.811 "dma_device_type": 1 00:14:10.811 }, 00:14:10.811 { 00:14:10.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.811 "dma_device_type": 2 00:14:10.811 } 00:14:10.811 ], 00:14:10.811 "driver_specific": { 00:14:10.811 "raid": { 00:14:10.811 "uuid": "d4f68f34-9ed4-4aec-8d6d-15bbd4698efb", 00:14:10.811 "strip_size_kb": 64, 00:14:10.811 "state": "online", 00:14:10.811 "raid_level": "raid0", 00:14:10.811 "superblock": true, 00:14:10.811 "num_base_bdevs": 4, 00:14:10.811 "num_base_bdevs_discovered": 4, 00:14:10.811 "num_base_bdevs_operational": 4, 00:14:10.811 
"base_bdevs_list": [ 00:14:10.811 { 00:14:10.811 "name": "pt1", 00:14:10.811 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:10.811 "is_configured": true, 00:14:10.811 "data_offset": 2048, 00:14:10.812 "data_size": 63488 00:14:10.812 }, 00:14:10.812 { 00:14:10.812 "name": "pt2", 00:14:10.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:10.812 "is_configured": true, 00:14:10.812 "data_offset": 2048, 00:14:10.812 "data_size": 63488 00:14:10.812 }, 00:14:10.812 { 00:14:10.812 "name": "pt3", 00:14:10.812 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:10.812 "is_configured": true, 00:14:10.812 "data_offset": 2048, 00:14:10.812 "data_size": 63488 00:14:10.812 }, 00:14:10.812 { 00:14:10.812 "name": "pt4", 00:14:10.812 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:10.812 "is_configured": true, 00:14:10.812 "data_offset": 2048, 00:14:10.812 "data_size": 63488 00:14:10.812 } 00:14:10.812 ] 00:14:10.812 } 00:14:10.812 } 00:14:10.812 }' 00:14:10.812 06:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:10.812 06:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:10.812 pt2 00:14:10.812 pt3 00:14:10.812 pt4' 00:14:10.812 06:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:10.812 06:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:10.812 06:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:10.812 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:10.812 "name": "pt1", 00:14:10.812 "aliases": [ 00:14:10.812 "00000000-0000-0000-0000-000000000001" 00:14:10.812 ], 00:14:10.812 "product_name": "passthru", 00:14:10.812 "block_size": 512, 00:14:10.812 "num_blocks": 65536, 00:14:10.812 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:10.812 "assigned_rate_limits": { 00:14:10.812 "rw_ios_per_sec": 0, 00:14:10.812 "rw_mbytes_per_sec": 0, 00:14:10.812 "r_mbytes_per_sec": 0, 00:14:10.812 "w_mbytes_per_sec": 0 00:14:10.812 }, 00:14:10.812 "claimed": true, 00:14:10.812 "claim_type": "exclusive_write", 00:14:10.812 "zoned": false, 00:14:10.812 "supported_io_types": { 00:14:10.812 "read": true, 00:14:10.812 "write": true, 00:14:10.812 "unmap": true, 00:14:10.812 "flush": true, 00:14:10.812 "reset": true, 00:14:10.812 "nvme_admin": false, 00:14:10.812 "nvme_io": false, 00:14:10.812 "nvme_io_md": false, 00:14:10.812 "write_zeroes": true, 00:14:10.812 "zcopy": true, 00:14:10.812 "get_zone_info": false, 00:14:10.812 "zone_management": false, 00:14:10.812 "zone_append": false, 00:14:10.812 "compare": false, 00:14:10.812 "compare_and_write": false, 00:14:10.812 "abort": true, 00:14:10.812 "seek_hole": false, 00:14:10.812 "seek_data": false, 00:14:10.812 "copy": true, 00:14:10.812 "nvme_iov_md": false 00:14:10.812 }, 00:14:10.812 "memory_domains": [ 00:14:10.812 { 00:14:10.812 "dma_device_id": "system", 00:14:10.812 "dma_device_type": 1 00:14:10.812 }, 00:14:10.812 { 00:14:10.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.812 "dma_device_type": 2 00:14:10.812 } 00:14:10.812 ], 00:14:10.812 "driver_specific": { 00:14:10.812 "passthru": { 00:14:10.812 "name": "pt1", 00:14:10.812 "base_bdev_name": "malloc1" 00:14:10.812 } 00:14:10.812 } 00:14:10.812 }' 00:14:10.812 06:45:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.072 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.072 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:11.072 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.072 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.072 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:11.072 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.072 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.072 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:11.072 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.332 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.332 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:11.332 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:11.332 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:11.332 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:11.332 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:11.332 "name": "pt2", 00:14:11.332 "aliases": [ 00:14:11.332 "00000000-0000-0000-0000-000000000002" 00:14:11.332 ], 00:14:11.332 "product_name": "passthru", 00:14:11.332 "block_size": 512, 00:14:11.332 "num_blocks": 65536, 00:14:11.332 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:11.332 "assigned_rate_limits": { 00:14:11.332 "rw_ios_per_sec": 0, 00:14:11.332 "rw_mbytes_per_sec": 0, 00:14:11.332 "r_mbytes_per_sec": 0, 00:14:11.332 "w_mbytes_per_sec": 0 00:14:11.332 }, 00:14:11.333 "claimed": true, 00:14:11.333 "claim_type": "exclusive_write", 00:14:11.333 "zoned": false, 00:14:11.333 "supported_io_types": { 00:14:11.333 "read": true, 00:14:11.333 "write": true, 00:14:11.333 "unmap": true, 00:14:11.333 "flush": true, 00:14:11.333 "reset": true, 00:14:11.333 "nvme_admin": false, 00:14:11.333 "nvme_io": false, 00:14:11.333 "nvme_io_md": false, 00:14:11.333 "write_zeroes": true, 00:14:11.333 "zcopy": true, 00:14:11.333 "get_zone_info": false, 00:14:11.333 "zone_management": false, 00:14:11.333 "zone_append": false, 00:14:11.333 "compare": false, 00:14:11.333 "compare_and_write": false, 00:14:11.333 "abort": true, 00:14:11.333 "seek_hole": false, 00:14:11.333 "seek_data": false, 00:14:11.333 "copy": true, 00:14:11.333 "nvme_iov_md": false 00:14:11.333 }, 00:14:11.333 "memory_domains": [ 00:14:11.333 { 00:14:11.333 "dma_device_id": "system", 00:14:11.333 "dma_device_type": 1 00:14:11.333 }, 00:14:11.333 { 00:14:11.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.333 "dma_device_type": 2 00:14:11.333 } 00:14:11.333 ], 00:14:11.333 "driver_specific": { 00:14:11.333 "passthru": { 00:14:11.333 "name": "pt2", 00:14:11.333 "base_bdev_name": "malloc2" 00:14:11.333 } 00:14:11.333 } 00:14:11.333 }' 00:14:11.333 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.593 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # jq .block_size 00:14:11.593 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:11.593 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.593 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.593 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:11.593 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.593 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.593 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:11.593 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.593 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.853 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:11.853 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:11.853 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:11.853 06:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:11.853 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:11.853 "name": "pt3", 00:14:11.853 "aliases": [ 00:14:11.853 "00000000-0000-0000-0000-000000000003" 00:14:11.853 ], 00:14:11.853 "product_name": "passthru", 00:14:11.853 "block_size": 512, 00:14:11.853 "num_blocks": 65536, 00:14:11.853 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:11.853 "assigned_rate_limits": { 00:14:11.853 "rw_ios_per_sec": 0, 00:14:11.853 "rw_mbytes_per_sec": 0, 00:14:11.853 "r_mbytes_per_sec": 0, 00:14:11.853 "w_mbytes_per_sec": 0 00:14:11.853 }, 00:14:11.853 "claimed": true, 00:14:11.853 "claim_type": "exclusive_write", 00:14:11.853 "zoned": false, 00:14:11.853 "supported_io_types": { 00:14:11.853 "read": true, 00:14:11.853 "write": true, 00:14:11.853 "unmap": true, 00:14:11.853 "flush": true, 00:14:11.853 "reset": true, 00:14:11.853 "nvme_admin": false, 00:14:11.853 "nvme_io": false, 00:14:11.853 "nvme_io_md": false, 00:14:11.853 "write_zeroes": true, 00:14:11.853 "zcopy": true, 00:14:11.853 "get_zone_info": false, 00:14:11.853 "zone_management": false, 00:14:11.853 "zone_append": false, 00:14:11.853 "compare": false, 00:14:11.853 "compare_and_write": false, 00:14:11.853 "abort": true, 00:14:11.853 "seek_hole": false, 00:14:11.853 "seek_data": false, 00:14:11.853 "copy": true, 00:14:11.853 "nvme_iov_md": false 00:14:11.853 }, 00:14:11.853 "memory_domains": [ 00:14:11.853 { 00:14:11.853 "dma_device_id": "system", 00:14:11.853 "dma_device_type": 1 00:14:11.853 }, 00:14:11.853 { 00:14:11.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.853 "dma_device_type": 2 00:14:11.853 } 00:14:11.853 ], 00:14:11.853 "driver_specific": { 00:14:11.853 "passthru": { 00:14:11.853 "name": "pt3", 00:14:11.853 "base_bdev_name": "malloc3" 00:14:11.853 } 00:14:11.853 } 00:14:11.853 }' 00:14:11.853 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:12.114 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:12.114 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:12.114 06:45:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:12.114 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:12.114 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:12.114 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:12.114 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:12.114 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:12.114 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:12.374 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:12.374 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:12.374 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:12.374 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:14:12.374 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:12.374 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:12.374 "name": "pt4", 00:14:12.374 "aliases": [ 00:14:12.374 "00000000-0000-0000-0000-000000000004" 00:14:12.374 ], 00:14:12.374 "product_name": "passthru", 00:14:12.374 "block_size": 512, 00:14:12.374 "num_blocks": 65536, 00:14:12.374 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:12.374 "assigned_rate_limits": { 00:14:12.374 "rw_ios_per_sec": 0, 00:14:12.374 "rw_mbytes_per_sec": 0, 00:14:12.374 "r_mbytes_per_sec": 0, 00:14:12.374 "w_mbytes_per_sec": 0 00:14:12.374 }, 00:14:12.374 "claimed": true, 00:14:12.374 "claim_type": "exclusive_write", 00:14:12.374 "zoned": false, 00:14:12.374 "supported_io_types": { 00:14:12.374 "read": true, 00:14:12.374 "write": true, 00:14:12.374 "unmap": true, 00:14:12.374 "flush": true, 00:14:12.374 "reset": true, 00:14:12.374 "nvme_admin": false, 00:14:12.374 "nvme_io": false, 00:14:12.374 "nvme_io_md": false, 00:14:12.374 "write_zeroes": true, 00:14:12.374 "zcopy": true, 00:14:12.374 "get_zone_info": false, 00:14:12.374 "zone_management": false, 00:14:12.374 "zone_append": false, 00:14:12.374 "compare": false, 00:14:12.374 "compare_and_write": false, 00:14:12.374 "abort": true, 00:14:12.374 "seek_hole": false, 00:14:12.374 "seek_data": false, 00:14:12.374 "copy": true, 00:14:12.374 "nvme_iov_md": false 00:14:12.374 }, 00:14:12.374 "memory_domains": [ 00:14:12.374 { 00:14:12.374 "dma_device_id": "system", 00:14:12.374 "dma_device_type": 1 00:14:12.374 }, 00:14:12.374 { 00:14:12.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.374 "dma_device_type": 2 00:14:12.374 } 00:14:12.374 ], 00:14:12.374 "driver_specific": { 00:14:12.374 "passthru": { 00:14:12.374 "name": "pt4", 00:14:12.374 "base_bdev_name": "malloc4" 00:14:12.374 } 00:14:12.374 } 00:14:12.374 }' 00:14:12.374 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:12.634 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:12.634 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:12.634 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:12.634 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# jq .md_size 00:14:12.634 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:12.635 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:12.635 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:12.635 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:12.635 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:12.635 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:12.894 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:12.894 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:12.894 06:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:14:12.894 [2024-08-14 06:45:40.079387] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.894 06:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=d4f68f34-9ed4-4aec-8d6d-15bbd4698efb 00:14:12.894 06:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z d4f68f34-9ed4-4aec-8d6d-15bbd4698efb ']' 00:14:12.894 06:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:13.155 [2024-08-14 06:45:40.282689] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:13.155 [2024-08-14 06:45:40.282749] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.155 [2024-08-14 06:45:40.282874] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.155 [2024-08-14 06:45:40.282956] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.155 [2024-08-14 06:45:40.282981] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:13.155 06:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.155 06:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:14:13.415 06:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:14:13.415 06:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:14:13.415 06:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:13.415 06:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:13.674 06:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:13.674 06:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:13.674 06:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:13.674 06:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:13.933 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:13.933 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:14:14.212 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:14.212 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:14.212 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:14:14.212 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:14.212 06:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:14:14.212 06:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:14.212 06:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:14.212 06:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:14:14.212 06:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:14.212 06:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:14:14.212 06:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:14.212 06:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:14:14.212 06:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:14.212 06:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:14.212 06:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:14.472 [2024-08-14 06:45:41.569427] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:14.472 [2024-08-14 06:45:41.571712] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:14.472 [2024-08-14 06:45:41.571764] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:14.472 [2024-08-14 06:45:41.571800] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:14.472 [2024-08-14 06:45:41.571853] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:14.472 [2024-08-14 06:45:41.571922] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:14.472 [2024-08-14 06:45:41.571943] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:14.472 [2024-08-14 06:45:41.571963] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:14.472 [2024-08-14 06:45:41.571977] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:14.472 [2024-08-14 06:45:41.571990] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:14:14.472 request: 00:14:14.472 { 00:14:14.472 "name": "raid_bdev1", 00:14:14.472 "raid_level": "raid0", 00:14:14.472 "base_bdevs": [ 00:14:14.472 "malloc1", 00:14:14.472 "malloc2", 00:14:14.472 "malloc3", 00:14:14.472 "malloc4" 00:14:14.472 ], 00:14:14.472 "strip_size_kb": 64, 00:14:14.472 "superblock": false, 00:14:14.472 "method": "bdev_raid_create", 00:14:14.472 "req_id": 1 00:14:14.472 } 00:14:14.472 Got JSON-RPC error response 00:14:14.472 response: 00:14:14.472 { 00:14:14.472 "code": -17, 00:14:14.472 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:14.472 } 00:14:14.472 06:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:14:14.472 06:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:14:14.472 06:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:14:14.472 06:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:14:14.472 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.472 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:14:14.733 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:14:14.733 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:14:14.733 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:14.733 [2024-08-14 06:45:41.949384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:14.733 [2024-08-14 06:45:41.949490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.733 [2024-08-14 06:45:41.949513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:14.733 [2024-08-14 06:45:41.949531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.733 [2024-08-14 06:45:41.952258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.733 [2024-08-14 06:45:41.952295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:14.733 [2024-08-14 06:45:41.952394] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:14.733 [2024-08-14 06:45:41.952452] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:14.733 pt1 00:14:14.733 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:14.733 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:14.733 06:45:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:14.733 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:14.733 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:14.733 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:14.733 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:14.733 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:14.733 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:14.733 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:14.733 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.733 06:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.992 06:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.992 "name": "raid_bdev1", 00:14:14.992 "uuid": "d4f68f34-9ed4-4aec-8d6d-15bbd4698efb", 00:14:14.992 "strip_size_kb": 64, 00:14:14.992 "state": "configuring", 00:14:14.992 "raid_level": "raid0", 00:14:14.992 "superblock": true, 00:14:14.992 "num_base_bdevs": 4, 00:14:14.992 "num_base_bdevs_discovered": 1, 00:14:14.992 "num_base_bdevs_operational": 4, 00:14:14.992 "base_bdevs_list": [ 00:14:14.992 { 00:14:14.992 "name": "pt1", 00:14:14.992 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:14.992 "is_configured": true, 00:14:14.992 "data_offset": 2048, 00:14:14.992 "data_size": 63488 00:14:14.992 }, 00:14:14.992 { 00:14:14.992 "name": null, 00:14:14.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.992 "is_configured": false, 00:14:14.992 "data_offset": 2048, 00:14:14.992 "data_size": 63488 00:14:14.993 }, 00:14:14.993 { 00:14:14.993 "name": null, 00:14:14.993 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:14.993 "is_configured": false, 00:14:14.993 "data_offset": 2048, 00:14:14.993 "data_size": 63488 00:14:14.993 }, 00:14:14.993 { 00:14:14.993 "name": null, 00:14:14.993 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:14.993 "is_configured": false, 00:14:14.993 "data_offset": 2048, 00:14:14.993 "data_size": 63488 00:14:14.993 } 00:14:14.993 ] 00:14:14.993 }' 00:14:14.993 06:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.993 06:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.561 06:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:14:15.561 06:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:15.821 [2024-08-14 06:45:42.865393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:15.821 [2024-08-14 06:45:42.865968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.821 [2024-08-14 06:45:42.866019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:15.821 [2024-08-14 06:45:42.866041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:14:15.821 [2024-08-14 06:45:42.866629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.821 [2024-08-14 06:45:42.866662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:15.821 [2024-08-14 06:45:42.866765] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:15.821 [2024-08-14 06:45:42.866804] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:15.821 pt2 00:14:15.821 06:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:15.821 [2024-08-14 06:45:43.061478] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:16.080 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:16.080 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:16.080 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:16.080 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:16.080 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:16.080 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:16.080 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:16.080 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:16.080 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:16.080 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:16.080 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.080 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.080 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:16.080 "name": "raid_bdev1", 00:14:16.080 "uuid": "d4f68f34-9ed4-4aec-8d6d-15bbd4698efb", 00:14:16.080 "strip_size_kb": 64, 00:14:16.080 "state": "configuring", 00:14:16.080 "raid_level": "raid0", 00:14:16.080 "superblock": true, 00:14:16.080 "num_base_bdevs": 4, 00:14:16.080 "num_base_bdevs_discovered": 1, 00:14:16.080 "num_base_bdevs_operational": 4, 00:14:16.080 "base_bdevs_list": [ 00:14:16.080 { 00:14:16.080 "name": "pt1", 00:14:16.080 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:16.080 "is_configured": true, 00:14:16.080 "data_offset": 2048, 00:14:16.080 "data_size": 63488 00:14:16.080 }, 00:14:16.080 { 00:14:16.080 "name": null, 00:14:16.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:16.080 "is_configured": false, 00:14:16.080 "data_offset": 2048, 00:14:16.080 "data_size": 63488 00:14:16.080 }, 00:14:16.080 { 00:14:16.080 "name": null, 00:14:16.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:16.080 "is_configured": false, 00:14:16.080 "data_offset": 2048, 00:14:16.080 "data_size": 63488 00:14:16.080 }, 00:14:16.080 { 00:14:16.080 "name": null, 00:14:16.080 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:16.080 "is_configured": false, 00:14:16.080 "data_offset": 2048, 
00:14:16.080 "data_size": 63488 00:14:16.080 } 00:14:16.080 ] 00:14:16.080 }' 00:14:16.080 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:16.080 06:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.649 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:14:16.649 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:16.649 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:16.909 [2024-08-14 06:45:43.972465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:16.909 [2024-08-14 06:45:43.972569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.909 [2024-08-14 06:45:43.972597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:16.909 [2024-08-14 06:45:43.972608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.909 [2024-08-14 06:45:43.973128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.909 [2024-08-14 06:45:43.973159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:16.909 [2024-08-14 06:45:43.973358] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:16.909 [2024-08-14 06:45:43.973397] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:16.909 pt2 00:14:16.909 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:14:16.909 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:16.909 06:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:17.169 [2024-08-14 06:45:44.176089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:17.169 [2024-08-14 06:45:44.176207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.169 [2024-08-14 06:45:44.176237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:17.169 [2024-08-14 06:45:44.176258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.169 [2024-08-14 06:45:44.176790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.169 [2024-08-14 06:45:44.176818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:17.169 [2024-08-14 06:45:44.176924] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:17.169 [2024-08-14 06:45:44.176962] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:17.169 pt3 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 
00:14:17.169 [2024-08-14 06:45:44.351844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:17.169 [2024-08-14 06:45:44.351958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.169 [2024-08-14 06:45:44.351991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:17.169 [2024-08-14 06:45:44.352002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.169 [2024-08-14 06:45:44.352552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.169 [2024-08-14 06:45:44.352580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:17.169 [2024-08-14 06:45:44.352685] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:17.169 [2024-08-14 06:45:44.352722] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:17.169 [2024-08-14 06:45:44.352876] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:17.169 [2024-08-14 06:45:44.352892] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:17.169 [2024-08-14 06:45:44.353223] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:17.169 [2024-08-14 06:45:44.353371] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:17.169 [2024-08-14 06:45:44.353391] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:14:17.169 [2024-08-14 06:45:44.353512] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.169 pt4 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.169 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.428 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:17.429 "name": "raid_bdev1", 00:14:17.429 "uuid": 
"d4f68f34-9ed4-4aec-8d6d-15bbd4698efb", 00:14:17.429 "strip_size_kb": 64, 00:14:17.429 "state": "online", 00:14:17.429 "raid_level": "raid0", 00:14:17.429 "superblock": true, 00:14:17.429 "num_base_bdevs": 4, 00:14:17.429 "num_base_bdevs_discovered": 4, 00:14:17.429 "num_base_bdevs_operational": 4, 00:14:17.429 "base_bdevs_list": [ 00:14:17.429 { 00:14:17.429 "name": "pt1", 00:14:17.429 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:17.429 "is_configured": true, 00:14:17.429 "data_offset": 2048, 00:14:17.429 "data_size": 63488 00:14:17.429 }, 00:14:17.429 { 00:14:17.429 "name": "pt2", 00:14:17.429 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:17.429 "is_configured": true, 00:14:17.429 "data_offset": 2048, 00:14:17.429 "data_size": 63488 00:14:17.429 }, 00:14:17.429 { 00:14:17.429 "name": "pt3", 00:14:17.429 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:17.429 "is_configured": true, 00:14:17.429 "data_offset": 2048, 00:14:17.429 "data_size": 63488 00:14:17.429 }, 00:14:17.429 { 00:14:17.429 "name": "pt4", 00:14:17.429 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:17.429 "is_configured": true, 00:14:17.429 "data_offset": 2048, 00:14:17.429 "data_size": 63488 00:14:17.429 } 00:14:17.429 ] 00:14:17.429 }' 00:14:17.429 06:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:17.429 06:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.996 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:14:17.996 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:17.996 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:17.996 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:17.996 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:17.996 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:17.996 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:17.996 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:18.274 [2024-08-14 06:45:45.314641] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.274 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:18.274 "name": "raid_bdev1", 00:14:18.274 "aliases": [ 00:14:18.274 "d4f68f34-9ed4-4aec-8d6d-15bbd4698efb" 00:14:18.274 ], 00:14:18.274 "product_name": "Raid Volume", 00:14:18.274 "block_size": 512, 00:14:18.274 "num_blocks": 253952, 00:14:18.274 "uuid": "d4f68f34-9ed4-4aec-8d6d-15bbd4698efb", 00:14:18.274 "assigned_rate_limits": { 00:14:18.274 "rw_ios_per_sec": 0, 00:14:18.274 "rw_mbytes_per_sec": 0, 00:14:18.274 "r_mbytes_per_sec": 0, 00:14:18.274 "w_mbytes_per_sec": 0 00:14:18.274 }, 00:14:18.274 "claimed": false, 00:14:18.274 "zoned": false, 00:14:18.274 "supported_io_types": { 00:14:18.274 "read": true, 00:14:18.274 "write": true, 00:14:18.274 "unmap": true, 00:14:18.274 "flush": true, 00:14:18.274 "reset": true, 00:14:18.274 "nvme_admin": false, 00:14:18.274 "nvme_io": false, 00:14:18.274 "nvme_io_md": false, 00:14:18.274 "write_zeroes": true, 00:14:18.274 "zcopy": false, 00:14:18.274 "get_zone_info": false, 00:14:18.274 "zone_management": 
false, 00:14:18.274 "zone_append": false, 00:14:18.274 "compare": false, 00:14:18.274 "compare_and_write": false, 00:14:18.274 "abort": false, 00:14:18.274 "seek_hole": false, 00:14:18.274 "seek_data": false, 00:14:18.274 "copy": false, 00:14:18.274 "nvme_iov_md": false 00:14:18.274 }, 00:14:18.274 "memory_domains": [ 00:14:18.274 { 00:14:18.274 "dma_device_id": "system", 00:14:18.274 "dma_device_type": 1 00:14:18.274 }, 00:14:18.274 { 00:14:18.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.274 "dma_device_type": 2 00:14:18.274 }, 00:14:18.274 { 00:14:18.274 "dma_device_id": "system", 00:14:18.274 "dma_device_type": 1 00:14:18.274 }, 00:14:18.274 { 00:14:18.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.274 "dma_device_type": 2 00:14:18.274 }, 00:14:18.274 { 00:14:18.274 "dma_device_id": "system", 00:14:18.274 "dma_device_type": 1 00:14:18.274 }, 00:14:18.274 { 00:14:18.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.274 "dma_device_type": 2 00:14:18.274 }, 00:14:18.274 { 00:14:18.274 "dma_device_id": "system", 00:14:18.274 "dma_device_type": 1 00:14:18.274 }, 00:14:18.274 { 00:14:18.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.274 "dma_device_type": 2 00:14:18.274 } 00:14:18.274 ], 00:14:18.274 "driver_specific": { 00:14:18.274 "raid": { 00:14:18.274 "uuid": "d4f68f34-9ed4-4aec-8d6d-15bbd4698efb", 00:14:18.274 "strip_size_kb": 64, 00:14:18.274 "state": "online", 00:14:18.274 "raid_level": "raid0", 00:14:18.274 "superblock": true, 00:14:18.274 "num_base_bdevs": 4, 00:14:18.274 "num_base_bdevs_discovered": 4, 00:14:18.274 "num_base_bdevs_operational": 4, 00:14:18.274 "base_bdevs_list": [ 00:14:18.274 { 00:14:18.274 "name": "pt1", 00:14:18.274 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:18.274 "is_configured": true, 00:14:18.274 "data_offset": 2048, 00:14:18.274 "data_size": 63488 00:14:18.274 }, 00:14:18.274 { 00:14:18.274 "name": "pt2", 00:14:18.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:18.274 "is_configured": true, 00:14:18.274 "data_offset": 2048, 00:14:18.274 "data_size": 63488 00:14:18.274 }, 00:14:18.274 { 00:14:18.274 "name": "pt3", 00:14:18.274 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:18.274 "is_configured": true, 00:14:18.274 "data_offset": 2048, 00:14:18.274 "data_size": 63488 00:14:18.274 }, 00:14:18.274 { 00:14:18.274 "name": "pt4", 00:14:18.274 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:18.274 "is_configured": true, 00:14:18.274 "data_offset": 2048, 00:14:18.274 "data_size": 63488 00:14:18.274 } 00:14:18.274 ] 00:14:18.274 } 00:14:18.274 } 00:14:18.274 }' 00:14:18.274 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:18.274 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:18.274 pt2 00:14:18.274 pt3 00:14:18.274 pt4' 00:14:18.274 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:18.274 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:18.274 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:18.534 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:18.534 "name": "pt1", 00:14:18.534 "aliases": [ 00:14:18.534 "00000000-0000-0000-0000-000000000001" 00:14:18.534 ], 00:14:18.534 "product_name": 
"passthru", 00:14:18.534 "block_size": 512, 00:14:18.534 "num_blocks": 65536, 00:14:18.534 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:18.534 "assigned_rate_limits": { 00:14:18.534 "rw_ios_per_sec": 0, 00:14:18.534 "rw_mbytes_per_sec": 0, 00:14:18.534 "r_mbytes_per_sec": 0, 00:14:18.534 "w_mbytes_per_sec": 0 00:14:18.534 }, 00:14:18.534 "claimed": true, 00:14:18.534 "claim_type": "exclusive_write", 00:14:18.534 "zoned": false, 00:14:18.534 "supported_io_types": { 00:14:18.534 "read": true, 00:14:18.534 "write": true, 00:14:18.534 "unmap": true, 00:14:18.534 "flush": true, 00:14:18.534 "reset": true, 00:14:18.534 "nvme_admin": false, 00:14:18.534 "nvme_io": false, 00:14:18.534 "nvme_io_md": false, 00:14:18.534 "write_zeroes": true, 00:14:18.534 "zcopy": true, 00:14:18.534 "get_zone_info": false, 00:14:18.534 "zone_management": false, 00:14:18.534 "zone_append": false, 00:14:18.534 "compare": false, 00:14:18.534 "compare_and_write": false, 00:14:18.534 "abort": true, 00:14:18.534 "seek_hole": false, 00:14:18.534 "seek_data": false, 00:14:18.534 "copy": true, 00:14:18.534 "nvme_iov_md": false 00:14:18.534 }, 00:14:18.534 "memory_domains": [ 00:14:18.534 { 00:14:18.534 "dma_device_id": "system", 00:14:18.534 "dma_device_type": 1 00:14:18.534 }, 00:14:18.534 { 00:14:18.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.534 "dma_device_type": 2 00:14:18.534 } 00:14:18.534 ], 00:14:18.534 "driver_specific": { 00:14:18.534 "passthru": { 00:14:18.534 "name": "pt1", 00:14:18.534 "base_bdev_name": "malloc1" 00:14:18.534 } 00:14:18.534 } 00:14:18.534 }' 00:14:18.534 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:18.534 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:18.534 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:18.534 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:18.534 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:18.534 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:18.534 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:18.534 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:18.793 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:18.793 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:18.793 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:18.793 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:18.793 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:18.793 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:18.793 06:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:19.052 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:19.052 "name": "pt2", 00:14:19.052 "aliases": [ 00:14:19.052 "00000000-0000-0000-0000-000000000002" 00:14:19.052 ], 00:14:19.052 "product_name": "passthru", 00:14:19.052 "block_size": 512, 00:14:19.052 "num_blocks": 65536, 00:14:19.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:19.052 
"assigned_rate_limits": { 00:14:19.052 "rw_ios_per_sec": 0, 00:14:19.052 "rw_mbytes_per_sec": 0, 00:14:19.052 "r_mbytes_per_sec": 0, 00:14:19.052 "w_mbytes_per_sec": 0 00:14:19.052 }, 00:14:19.052 "claimed": true, 00:14:19.052 "claim_type": "exclusive_write", 00:14:19.052 "zoned": false, 00:14:19.052 "supported_io_types": { 00:14:19.052 "read": true, 00:14:19.052 "write": true, 00:14:19.052 "unmap": true, 00:14:19.052 "flush": true, 00:14:19.052 "reset": true, 00:14:19.052 "nvme_admin": false, 00:14:19.052 "nvme_io": false, 00:14:19.052 "nvme_io_md": false, 00:14:19.052 "write_zeroes": true, 00:14:19.052 "zcopy": true, 00:14:19.052 "get_zone_info": false, 00:14:19.052 "zone_management": false, 00:14:19.052 "zone_append": false, 00:14:19.052 "compare": false, 00:14:19.052 "compare_and_write": false, 00:14:19.052 "abort": true, 00:14:19.052 "seek_hole": false, 00:14:19.052 "seek_data": false, 00:14:19.052 "copy": true, 00:14:19.052 "nvme_iov_md": false 00:14:19.052 }, 00:14:19.052 "memory_domains": [ 00:14:19.052 { 00:14:19.052 "dma_device_id": "system", 00:14:19.052 "dma_device_type": 1 00:14:19.052 }, 00:14:19.052 { 00:14:19.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.053 "dma_device_type": 2 00:14:19.053 } 00:14:19.053 ], 00:14:19.053 "driver_specific": { 00:14:19.053 "passthru": { 00:14:19.053 "name": "pt2", 00:14:19.053 "base_bdev_name": "malloc2" 00:14:19.053 } 00:14:19.053 } 00:14:19.053 }' 00:14:19.053 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:19.053 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:19.053 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:19.053 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:19.053 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:19.053 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:19.053 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:19.053 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:19.311 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:19.311 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:19.311 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:19.311 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:19.311 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:19.311 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:19.311 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:19.570 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:19.570 "name": "pt3", 00:14:19.570 "aliases": [ 00:14:19.570 "00000000-0000-0000-0000-000000000003" 00:14:19.570 ], 00:14:19.570 "product_name": "passthru", 00:14:19.570 "block_size": 512, 00:14:19.570 "num_blocks": 65536, 00:14:19.570 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:19.570 "assigned_rate_limits": { 00:14:19.570 "rw_ios_per_sec": 0, 00:14:19.570 "rw_mbytes_per_sec": 0, 00:14:19.570 "r_mbytes_per_sec": 0, 00:14:19.570 
"w_mbytes_per_sec": 0 00:14:19.570 }, 00:14:19.570 "claimed": true, 00:14:19.570 "claim_type": "exclusive_write", 00:14:19.570 "zoned": false, 00:14:19.570 "supported_io_types": { 00:14:19.570 "read": true, 00:14:19.570 "write": true, 00:14:19.570 "unmap": true, 00:14:19.570 "flush": true, 00:14:19.570 "reset": true, 00:14:19.570 "nvme_admin": false, 00:14:19.570 "nvme_io": false, 00:14:19.570 "nvme_io_md": false, 00:14:19.570 "write_zeroes": true, 00:14:19.570 "zcopy": true, 00:14:19.570 "get_zone_info": false, 00:14:19.570 "zone_management": false, 00:14:19.570 "zone_append": false, 00:14:19.570 "compare": false, 00:14:19.570 "compare_and_write": false, 00:14:19.570 "abort": true, 00:14:19.570 "seek_hole": false, 00:14:19.570 "seek_data": false, 00:14:19.570 "copy": true, 00:14:19.570 "nvme_iov_md": false 00:14:19.570 }, 00:14:19.570 "memory_domains": [ 00:14:19.570 { 00:14:19.570 "dma_device_id": "system", 00:14:19.570 "dma_device_type": 1 00:14:19.570 }, 00:14:19.570 { 00:14:19.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.570 "dma_device_type": 2 00:14:19.570 } 00:14:19.570 ], 00:14:19.570 "driver_specific": { 00:14:19.570 "passthru": { 00:14:19.570 "name": "pt3", 00:14:19.570 "base_bdev_name": "malloc3" 00:14:19.570 } 00:14:19.570 } 00:14:19.570 }' 00:14:19.570 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:19.570 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:19.570 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:19.570 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:19.571 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:19.571 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:19.571 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:19.571 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:19.830 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:19.830 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:19.830 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:19.830 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:19.830 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:19.830 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:19.830 06:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:14:20.089 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:20.089 "name": "pt4", 00:14:20.089 "aliases": [ 00:14:20.089 "00000000-0000-0000-0000-000000000004" 00:14:20.089 ], 00:14:20.089 "product_name": "passthru", 00:14:20.089 "block_size": 512, 00:14:20.089 "num_blocks": 65536, 00:14:20.089 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:20.089 "assigned_rate_limits": { 00:14:20.089 "rw_ios_per_sec": 0, 00:14:20.089 "rw_mbytes_per_sec": 0, 00:14:20.089 "r_mbytes_per_sec": 0, 00:14:20.090 "w_mbytes_per_sec": 0 00:14:20.090 }, 00:14:20.090 "claimed": true, 00:14:20.090 "claim_type": "exclusive_write", 00:14:20.090 "zoned": false, 
00:14:20.090 "supported_io_types": { 00:14:20.090 "read": true, 00:14:20.090 "write": true, 00:14:20.090 "unmap": true, 00:14:20.090 "flush": true, 00:14:20.090 "reset": true, 00:14:20.090 "nvme_admin": false, 00:14:20.090 "nvme_io": false, 00:14:20.090 "nvme_io_md": false, 00:14:20.090 "write_zeroes": true, 00:14:20.090 "zcopy": true, 00:14:20.090 "get_zone_info": false, 00:14:20.090 "zone_management": false, 00:14:20.090 "zone_append": false, 00:14:20.090 "compare": false, 00:14:20.090 "compare_and_write": false, 00:14:20.090 "abort": true, 00:14:20.090 "seek_hole": false, 00:14:20.090 "seek_data": false, 00:14:20.090 "copy": true, 00:14:20.090 "nvme_iov_md": false 00:14:20.090 }, 00:14:20.090 "memory_domains": [ 00:14:20.090 { 00:14:20.090 "dma_device_id": "system", 00:14:20.090 "dma_device_type": 1 00:14:20.090 }, 00:14:20.090 { 00:14:20.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.090 "dma_device_type": 2 00:14:20.090 } 00:14:20.090 ], 00:14:20.090 "driver_specific": { 00:14:20.090 "passthru": { 00:14:20.090 "name": "pt4", 00:14:20.090 "base_bdev_name": "malloc4" 00:14:20.090 } 00:14:20.090 } 00:14:20.090 }' 00:14:20.090 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:20.090 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:20.090 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:20.090 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:20.090 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:20.090 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:20.090 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:20.090 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:20.350 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:20.350 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:20.350 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:20.350 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:20.350 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:20.350 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:14:20.350 [2024-08-14 06:45:47.602824] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.609 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' d4f68f34-9ed4-4aec-8d6d-15bbd4698efb '!=' d4f68f34-9ed4-4aec-8d6d-15bbd4698efb ']' 00:14:20.609 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:14:20.609 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:20.609 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:20.609 06:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 85557 00:14:20.609 06:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 85557 ']' 00:14:20.609 06:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 85557 00:14:20.609 06:45:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:14:20.609 06:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:20.609 06:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85557 00:14:20.609 06:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:20.609 06:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:20.609 06:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85557' 00:14:20.609 killing process with pid 85557 00:14:20.609 06:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 85557 00:14:20.609 [2024-08-14 06:45:47.663568] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:20.609 [2024-08-14 06:45:47.663808] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:20.609 06:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 85557 00:14:20.609 [2024-08-14 06:45:47.663937] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:20.609 [2024-08-14 06:45:47.663998] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:14:20.609 [2024-08-14 06:45:47.745400] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:21.506 06:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:14:21.506 00:14:21.506 real 0m14.039s 00:14:21.506 user 0m25.316s 00:14:21.506 sys 0m2.234s 00:14:21.506 06:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:21.506 06:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.506 ************************************ 00:14:21.506 END TEST raid_superblock_test 00:14:21.506 ************************************ 00:14:21.506 06:45:48 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:14:21.506 06:45:48 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:21.506 06:45:48 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:21.506 06:45:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:21.506 ************************************ 00:14:21.506 START TEST raid_read_error_test 00:14:21.506 ************************************ 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 4 read 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 
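The raid_io_error_test prologue traced above only assembles the list of named base devices; in effect (a simplified sketch, not the script's literal loop):

    num_base_bdevs=4
    base_bdevs=()
    for ((i = 1; i <= num_base_bdevs; i++)); do
        base_bdevs+=("BaseBdev${i}")    # BaseBdev1 .. BaseBdev4
    done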
00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.lrF1p6TF9P 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=86045 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 86045 /var/tmp/spdk-raid.sock 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 86045 ']' 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:21.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
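The bdevperf launch traced just above follows the usual background-process-plus-RPC-socket handshake; a rough sketch of that pattern, reusing the exact command line from this run (waitforlisten is the autotest helper that polls the socket):

    bdevperf_log=$(mktemp -p /raidtest)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
    raid_pid=$!
    # block until the UNIX-domain RPC socket answers before issuing bdev RPCs
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock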
00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:21.506 06:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.506 [2024-08-14 06:45:48.304815] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:14:21.506 [2024-08-14 06:45:48.304954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86045 ] 00:14:21.506 [2024-08-14 06:45:48.455046] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.506 [2024-08-14 06:45:48.533595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.506 [2024-08-14 06:45:48.612204] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.506 [2024-08-14 06:45:48.612247] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.074 06:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:22.074 06:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:14:22.074 06:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:22.074 06:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:22.074 BaseBdev1_malloc 00:14:22.074 06:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:22.334 true 00:14:22.334 06:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:22.593 [2024-08-14 06:45:49.648505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:22.593 [2024-08-14 06:45:49.648619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.593 [2024-08-14 06:45:49.648657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:14:22.593 [2024-08-14 06:45:49.648673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.593 [2024-08-14 06:45:49.651380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.593 [2024-08-14 06:45:49.651427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:22.593 BaseBdev1 00:14:22.593 06:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:22.593 06:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:22.853 BaseBdev2_malloc 00:14:22.853 06:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:22.853 true 00:14:22.853 06:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc 
-p BaseBdev2 00:14:23.112 [2024-08-14 06:45:50.226684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:23.112 [2024-08-14 06:45:50.226783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.112 [2024-08-14 06:45:50.226822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:14:23.112 [2024-08-14 06:45:50.226833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.112 [2024-08-14 06:45:50.229402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.112 [2024-08-14 06:45:50.229521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:23.112 BaseBdev2 00:14:23.112 06:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:23.112 06:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:23.372 BaseBdev3_malloc 00:14:23.372 06:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:23.631 true 00:14:23.632 06:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:23.632 [2024-08-14 06:45:50.844251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:23.632 [2024-08-14 06:45:50.844344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.632 [2024-08-14 06:45:50.844374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:14:23.632 [2024-08-14 06:45:50.844386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.632 [2024-08-14 06:45:50.847015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.632 BaseBdev3 00:14:23.632 [2024-08-14 06:45:50.847143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:23.632 06:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:23.632 06:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:23.891 BaseBdev4_malloc 00:14:23.891 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:14:24.151 true 00:14:24.151 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:24.411 [2024-08-14 06:45:51.458342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:24.411 [2024-08-14 06:45:51.458561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.411 [2024-08-14 06:45:51.458610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:24.411 [2024-08-14 06:45:51.458648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:14:24.411 [2024-08-14 06:45:51.461383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.411 [2024-08-14 06:45:51.461471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:24.411 BaseBdev4 00:14:24.411 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:14:24.411 [2024-08-14 06:45:51.662089] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.411 [2024-08-14 06:45:51.664544] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.411 [2024-08-14 06:45:51.664705] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.411 [2024-08-14 06:45:51.664815] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:24.411 [2024-08-14 06:45:51.665094] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:14:24.411 [2024-08-14 06:45:51.665150] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:24.671 [2024-08-14 06:45:51.665612] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:24.671 [2024-08-14 06:45:51.665860] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:14:24.671 [2024-08-14 06:45:51.665904] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:14:24.671 [2024-08-14 06:45:51.666207] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.671 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:24.671 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:24.671 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:24.671 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:24.671 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:24.671 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:24.671 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:24.671 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:24.671 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:24.671 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:24.671 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.671 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.671 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:24.671 "name": "raid_bdev1", 00:14:24.671 "uuid": "85540df3-547c-4f84-939d-1e8015c358e4", 00:14:24.671 "strip_size_kb": 64, 00:14:24.671 "state": "online", 00:14:24.671 "raid_level": "raid0", 00:14:24.671 "superblock": true, 
00:14:24.671 "num_base_bdevs": 4, 00:14:24.671 "num_base_bdevs_discovered": 4, 00:14:24.671 "num_base_bdevs_operational": 4, 00:14:24.671 "base_bdevs_list": [ 00:14:24.671 { 00:14:24.671 "name": "BaseBdev1", 00:14:24.671 "uuid": "5b25d69c-0ed1-5d85-8824-5ea5cfd4ce92", 00:14:24.671 "is_configured": true, 00:14:24.671 "data_offset": 2048, 00:14:24.671 "data_size": 63488 00:14:24.671 }, 00:14:24.671 { 00:14:24.671 "name": "BaseBdev2", 00:14:24.671 "uuid": "5446b1dc-0d42-59c8-8fb4-f98217ccbeea", 00:14:24.671 "is_configured": true, 00:14:24.671 "data_offset": 2048, 00:14:24.671 "data_size": 63488 00:14:24.671 }, 00:14:24.671 { 00:14:24.671 "name": "BaseBdev3", 00:14:24.671 "uuid": "8a110caa-60c5-5c02-9d69-849f6152b57e", 00:14:24.671 "is_configured": true, 00:14:24.671 "data_offset": 2048, 00:14:24.671 "data_size": 63488 00:14:24.671 }, 00:14:24.671 { 00:14:24.671 "name": "BaseBdev4", 00:14:24.671 "uuid": "5ed55f39-a114-56b9-bfa5-2d3d5372702f", 00:14:24.671 "is_configured": true, 00:14:24.671 "data_offset": 2048, 00:14:24.671 "data_size": 63488 00:14:24.671 } 00:14:24.671 ] 00:14:24.671 }' 00:14:24.671 06:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:24.671 06:45:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.240 06:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:25.240 06:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:14:25.240 [2024-08-14 06:45:52.465455] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:26.234 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:26.494 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:14:26.494 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:26.494 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:14:26.494 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:26.494 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:26.494 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:26.494 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:26.494 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:26.494 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:26.494 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:26.494 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:26.494 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:26.494 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:26.494 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.494 06:45:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.753 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:26.753 "name": "raid_bdev1", 00:14:26.753 "uuid": "85540df3-547c-4f84-939d-1e8015c358e4", 00:14:26.753 "strip_size_kb": 64, 00:14:26.753 "state": "online", 00:14:26.753 "raid_level": "raid0", 00:14:26.753 "superblock": true, 00:14:26.753 "num_base_bdevs": 4, 00:14:26.753 "num_base_bdevs_discovered": 4, 00:14:26.753 "num_base_bdevs_operational": 4, 00:14:26.753 "base_bdevs_list": [ 00:14:26.753 { 00:14:26.753 "name": "BaseBdev1", 00:14:26.753 "uuid": "5b25d69c-0ed1-5d85-8824-5ea5cfd4ce92", 00:14:26.753 "is_configured": true, 00:14:26.753 "data_offset": 2048, 00:14:26.753 "data_size": 63488 00:14:26.753 }, 00:14:26.753 { 00:14:26.753 "name": "BaseBdev2", 00:14:26.753 "uuid": "5446b1dc-0d42-59c8-8fb4-f98217ccbeea", 00:14:26.753 "is_configured": true, 00:14:26.753 "data_offset": 2048, 00:14:26.753 "data_size": 63488 00:14:26.753 }, 00:14:26.753 { 00:14:26.753 "name": "BaseBdev3", 00:14:26.753 "uuid": "8a110caa-60c5-5c02-9d69-849f6152b57e", 00:14:26.753 "is_configured": true, 00:14:26.753 "data_offset": 2048, 00:14:26.753 "data_size": 63488 00:14:26.753 }, 00:14:26.753 { 00:14:26.753 "name": "BaseBdev4", 00:14:26.753 "uuid": "5ed55f39-a114-56b9-bfa5-2d3d5372702f", 00:14:26.753 "is_configured": true, 00:14:26.753 "data_offset": 2048, 00:14:26.753 "data_size": 63488 00:14:26.753 } 00:14:26.753 ] 00:14:26.753 }' 00:14:26.753 06:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:26.753 06:45:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.323 06:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:27.323 [2024-08-14 06:45:54.480658] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.323 [2024-08-14 06:45:54.480716] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.323 [2024-08-14 06:45:54.483250] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.323 [2024-08-14 06:45:54.483309] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.323 [2024-08-14 06:45:54.483374] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.323 [2024-08-14 06:45:54.483404] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:14:27.323 0 00:14:27.323 06:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 86045 00:14:27.323 06:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 86045 ']' 00:14:27.323 06:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 86045 00:14:27.323 06:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:14:27.323 06:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:27.323 06:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86045 00:14:27.323 killing process with pid 86045 00:14:27.323 06:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:27.323 06:45:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:27.324 06:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86045' 00:14:27.324 06:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 86045 00:14:27.324 [2024-08-14 06:45:54.537590] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.324 06:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 86045 00:14:27.583 [2024-08-14 06:45:54.604734] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:27.843 06:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.lrF1p6TF9P 00:14:27.843 06:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:14:27.843 06:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:14:27.843 06:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.50 00:14:27.843 06:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:14:27.843 06:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:27.843 06:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:27.843 06:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.50 != \0\.\0\0 ]] 00:14:27.843 00:14:27.843 real 0m6.788s 00:14:27.843 user 0m10.476s 00:14:27.843 sys 0m1.063s 00:14:27.843 06:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:27.843 06:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.843 ************************************ 00:14:27.843 END TEST raid_read_error_test 00:14:27.843 ************************************ 00:14:27.843 06:45:55 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:14:27.843 06:45:55 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:27.843 06:45:55 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:27.843 06:45:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:27.843 ************************************ 00:14:27.843 START TEST raid_write_error_test 00:14:27.843 ************************************ 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 4 write 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
(( i++ )) 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:27.843 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.o9v3y2RwKi 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=86234 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 86234 /var/tmp/spdk-raid.sock 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 86234 ']' 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:27.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:27.844 06:45:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.104 [2024-08-14 06:45:55.159050] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:14:28.104 [2024-08-14 06:45:55.159626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86234 ] 00:14:28.104 [2024-08-14 06:45:55.305797] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.364 [2024-08-14 06:45:55.382842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.364 [2024-08-14 06:45:55.460845] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.364 [2024-08-14 06:45:55.460902] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.932 06:45:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:28.932 06:45:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:14:28.932 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:28.932 06:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:28.932 BaseBdev1_malloc 00:14:28.932 06:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:29.192 true 00:14:29.192 06:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:29.453 [2024-08-14 06:45:56.604355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:29.453 [2024-08-14 06:45:56.604452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.453 [2024-08-14 06:45:56.604483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:14:29.453 [2024-08-14 06:45:56.604501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.453 [2024-08-14 06:45:56.607553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.453 [2024-08-14 06:45:56.607601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:29.453 BaseBdev1 00:14:29.453 06:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:29.453 06:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:29.713 BaseBdev2_malloc 00:14:29.713 06:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:29.972 true 00:14:29.972 06:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:30.232 [2024-08-14 06:45:57.258716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:30.232 [2024-08-14 06:45:57.258811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.232 [2024-08-14 06:45:57.258841] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:14:30.232 [2024-08-14 06:45:57.258853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.232 [2024-08-14 06:45:57.261504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.232 [2024-08-14 06:45:57.261561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:30.232 BaseBdev2 00:14:30.232 06:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:30.232 06:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:30.491 BaseBdev3_malloc 00:14:30.491 06:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:30.491 true 00:14:30.491 06:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:30.751 [2024-08-14 06:45:57.916696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:30.751 [2024-08-14 06:45:57.916785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.751 [2024-08-14 06:45:57.916814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:14:30.751 [2024-08-14 06:45:57.916826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.751 [2024-08-14 06:45:57.919642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.751 [2024-08-14 06:45:57.919685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:30.751 BaseBdev3 00:14:30.751 06:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:30.751 06:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:31.011 BaseBdev4_malloc 00:14:31.011 06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:14:31.272 true 00:14:31.272 06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:31.533 [2024-08-14 06:45:58.543226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:31.533 [2024-08-14 06:45:58.543313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.533 [2024-08-14 06:45:58.543344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:31.533 [2024-08-14 06:45:58.543360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.533 [2024-08-14 06:45:58.546068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.533 [2024-08-14 06:45:58.546114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:31.533 BaseBdev4 00:14:31.533 
06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:14:31.533 [2024-08-14 06:45:58.746965] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.533 [2024-08-14 06:45:58.749250] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:31.533 [2024-08-14 06:45:58.749372] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:31.533 [2024-08-14 06:45:58.749452] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:31.533 [2024-08-14 06:45:58.749708] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:14:31.533 [2024-08-14 06:45:58.749732] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:31.533 [2024-08-14 06:45:58.750120] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:31.533 [2024-08-14 06:45:58.750340] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:14:31.533 [2024-08-14 06:45:58.750360] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:14:31.533 [2024-08-14 06:45:58.750553] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.533 06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:31.533 06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:31.533 06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:31.533 06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:31.533 06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:31.533 06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:31.533 06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:31.533 06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:31.533 06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:31.533 06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:31.533 06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.533 06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.792 06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:31.792 "name": "raid_bdev1", 00:14:31.792 "uuid": "6dd8df94-81f7-4b20-9c11-95e6f1ea09b2", 00:14:31.792 "strip_size_kb": 64, 00:14:31.792 "state": "online", 00:14:31.792 "raid_level": "raid0", 00:14:31.792 "superblock": true, 00:14:31.792 "num_base_bdevs": 4, 00:14:31.792 "num_base_bdevs_discovered": 4, 00:14:31.792 "num_base_bdevs_operational": 4, 00:14:31.792 "base_bdevs_list": [ 00:14:31.792 { 00:14:31.792 "name": "BaseBdev1", 00:14:31.792 "uuid": "99a9a97a-4a6a-57cf-b5ee-e0138a90c892", 00:14:31.792 
"is_configured": true, 00:14:31.792 "data_offset": 2048, 00:14:31.792 "data_size": 63488 00:14:31.792 }, 00:14:31.792 { 00:14:31.792 "name": "BaseBdev2", 00:14:31.792 "uuid": "ac881c06-8981-58bd-9e1e-9f8e069a7efd", 00:14:31.792 "is_configured": true, 00:14:31.792 "data_offset": 2048, 00:14:31.793 "data_size": 63488 00:14:31.793 }, 00:14:31.793 { 00:14:31.793 "name": "BaseBdev3", 00:14:31.793 "uuid": "2c7547aa-3ce8-5eea-95f5-bef69bf32e88", 00:14:31.793 "is_configured": true, 00:14:31.793 "data_offset": 2048, 00:14:31.793 "data_size": 63488 00:14:31.793 }, 00:14:31.793 { 00:14:31.793 "name": "BaseBdev4", 00:14:31.793 "uuid": "2af74f8a-6554-51a9-8876-a79cafe05f1c", 00:14:31.793 "is_configured": true, 00:14:31.793 "data_offset": 2048, 00:14:31.793 "data_size": 63488 00:14:31.793 } 00:14:31.793 ] 00:14:31.793 }' 00:14:31.793 06:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:31.793 06:45:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.362 06:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:14:32.362 06:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:32.362 [2024-08-14 06:45:59.538080] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:33.300 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:33.560 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:14:33.560 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:33.560 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:14:33.560 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:33.560 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:33.560 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:33.560 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:33.560 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:33.560 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:33.560 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:33.560 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:33.560 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:33.560 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:33.560 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.560 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.820 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.820 "name": "raid_bdev1", 00:14:33.820 "uuid": 
"6dd8df94-81f7-4b20-9c11-95e6f1ea09b2", 00:14:33.820 "strip_size_kb": 64, 00:14:33.820 "state": "online", 00:14:33.820 "raid_level": "raid0", 00:14:33.820 "superblock": true, 00:14:33.820 "num_base_bdevs": 4, 00:14:33.820 "num_base_bdevs_discovered": 4, 00:14:33.820 "num_base_bdevs_operational": 4, 00:14:33.820 "base_bdevs_list": [ 00:14:33.820 { 00:14:33.820 "name": "BaseBdev1", 00:14:33.820 "uuid": "99a9a97a-4a6a-57cf-b5ee-e0138a90c892", 00:14:33.820 "is_configured": true, 00:14:33.820 "data_offset": 2048, 00:14:33.820 "data_size": 63488 00:14:33.820 }, 00:14:33.820 { 00:14:33.820 "name": "BaseBdev2", 00:14:33.820 "uuid": "ac881c06-8981-58bd-9e1e-9f8e069a7efd", 00:14:33.820 "is_configured": true, 00:14:33.820 "data_offset": 2048, 00:14:33.820 "data_size": 63488 00:14:33.820 }, 00:14:33.820 { 00:14:33.820 "name": "BaseBdev3", 00:14:33.820 "uuid": "2c7547aa-3ce8-5eea-95f5-bef69bf32e88", 00:14:33.820 "is_configured": true, 00:14:33.820 "data_offset": 2048, 00:14:33.820 "data_size": 63488 00:14:33.820 }, 00:14:33.820 { 00:14:33.820 "name": "BaseBdev4", 00:14:33.820 "uuid": "2af74f8a-6554-51a9-8876-a79cafe05f1c", 00:14:33.820 "is_configured": true, 00:14:33.820 "data_offset": 2048, 00:14:33.820 "data_size": 63488 00:14:33.820 } 00:14:33.820 ] 00:14:33.820 }' 00:14:33.820 06:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:33.820 06:46:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.389 06:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:34.389 [2024-08-14 06:46:01.613224] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:34.389 [2024-08-14 06:46:01.613290] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.389 [2024-08-14 06:46:01.615875] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.389 [2024-08-14 06:46:01.615939] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.389 [2024-08-14 06:46:01.615989] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.389 [2024-08-14 06:46:01.616002] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:14:34.389 0 00:14:34.389 06:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 86234 00:14:34.389 06:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 86234 ']' 00:14:34.389 06:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 86234 00:14:34.389 06:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:14:34.649 06:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:34.649 06:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86234 00:14:34.649 06:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:34.649 06:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:34.649 killing process with pid 86234 00:14:34.649 06:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86234' 00:14:34.649 06:46:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 86234 00:14:34.649 [2024-08-14 06:46:01.676103] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.649 06:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 86234 00:14:34.649 [2024-08-14 06:46:01.744971] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.907 06:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.o9v3y2RwKi 00:14:34.907 06:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:14:34.907 06:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:14:34.907 06:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.48 00:14:34.907 06:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:14:34.907 ************************************ 00:14:34.907 END TEST raid_write_error_test 00:14:34.907 ************************************ 00:14:34.907 06:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:34.907 06:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:34.907 06:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.48 != \0\.\0\0 ]] 00:14:34.907 00:14:34.907 real 0m7.071s 00:14:34.907 user 0m11.029s 00:14:34.907 sys 0m1.094s 00:14:34.907 06:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:34.907 06:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.166 06:46:02 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:14:35.166 06:46:02 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:14:35.166 06:46:02 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:35.166 06:46:02 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:35.166 06:46:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.166 ************************************ 00:14:35.166 START TEST raid_state_function_test 00:14:35.166 ************************************ 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 false 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:35.166 Process raid pid: 86416 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=86416 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 86416' 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 86416 /var/tmp/spdk-raid.sock 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 86416 ']' 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:35.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:35.166 06:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.166 [2024-08-14 06:46:02.302538] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:14:35.166 [2024-08-14 06:46:02.302673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.426 [2024-08-14 06:46:02.448917] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.426 [2024-08-14 06:46:02.528207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.426 [2024-08-14 06:46:02.606027] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.426 [2024-08-14 06:46:02.606071] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.995 06:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:35.995 06:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:14:35.995 06:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:36.256 [2024-08-14 06:46:03.290848] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.256 [2024-08-14 06:46:03.291045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.256 [2024-08-14 06:46:03.291067] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.256 [2024-08-14 06:46:03.291077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.256 [2024-08-14 06:46:03.291092] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:36.256 [2024-08-14 06:46:03.291100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:36.256 [2024-08-14 06:46:03.291113] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:36.256 [2024-08-14 06:46:03.291121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:36.256 06:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:36.256 06:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:36.256 06:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:36.256 06:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:36.256 06:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:36.256 06:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:36.256 06:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:36.256 06:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:36.256 06:46:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:36.256 06:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:36.256 06:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.256 06:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.256 06:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:36.256 "name": "Existed_Raid", 00:14:36.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.256 "strip_size_kb": 64, 00:14:36.256 "state": "configuring", 00:14:36.256 "raid_level": "concat", 00:14:36.256 "superblock": false, 00:14:36.256 "num_base_bdevs": 4, 00:14:36.256 "num_base_bdevs_discovered": 0, 00:14:36.256 "num_base_bdevs_operational": 4, 00:14:36.256 "base_bdevs_list": [ 00:14:36.256 { 00:14:36.256 "name": "BaseBdev1", 00:14:36.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.256 "is_configured": false, 00:14:36.256 "data_offset": 0, 00:14:36.256 "data_size": 0 00:14:36.256 }, 00:14:36.256 { 00:14:36.256 "name": "BaseBdev2", 00:14:36.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.256 "is_configured": false, 00:14:36.256 "data_offset": 0, 00:14:36.256 "data_size": 0 00:14:36.256 }, 00:14:36.256 { 00:14:36.256 "name": "BaseBdev3", 00:14:36.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.256 "is_configured": false, 00:14:36.256 "data_offset": 0, 00:14:36.256 "data_size": 0 00:14:36.256 }, 00:14:36.256 { 00:14:36.256 "name": "BaseBdev4", 00:14:36.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.256 "is_configured": false, 00:14:36.256 "data_offset": 0, 00:14:36.256 "data_size": 0 00:14:36.256 } 00:14:36.256 ] 00:14:36.256 }' 00:14:36.256 06:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:36.256 06:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.825 06:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:37.085 [2024-08-14 06:46:04.193197] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:37.085 [2024-08-14 06:46:04.193382] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:37.085 06:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:37.345 [2024-08-14 06:46:04.396866] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:37.345 [2024-08-14 06:46:04.397057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:37.345 [2024-08-14 06:46:04.397094] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:37.345 [2024-08-14 06:46:04.397118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:37.345 [2024-08-14 06:46:04.397143] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:37.345 [2024-08-14 06:46:04.397176] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:37.345 [2024-08-14 06:46:04.397203] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:37.345 [2024-08-14 06:46:04.397245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:37.345 06:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:37.605 [2024-08-14 06:46:04.692626] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.605 BaseBdev1 00:14:37.605 06:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:37.605 06:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:37.605 06:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:37.605 06:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:37.605 06:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:37.605 06:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:37.605 06:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:37.865 06:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:38.125 [ 00:14:38.125 { 00:14:38.125 "name": "BaseBdev1", 00:14:38.125 "aliases": [ 00:14:38.125 "3c7850d9-bc4f-46db-9e5d-7ad26b4fcc94" 00:14:38.125 ], 00:14:38.125 "product_name": "Malloc disk", 00:14:38.125 "block_size": 512, 00:14:38.125 "num_blocks": 65536, 00:14:38.125 "uuid": "3c7850d9-bc4f-46db-9e5d-7ad26b4fcc94", 00:14:38.125 "assigned_rate_limits": { 00:14:38.125 "rw_ios_per_sec": 0, 00:14:38.125 "rw_mbytes_per_sec": 0, 00:14:38.125 "r_mbytes_per_sec": 0, 00:14:38.125 "w_mbytes_per_sec": 0 00:14:38.125 }, 00:14:38.125 "claimed": true, 00:14:38.125 "claim_type": "exclusive_write", 00:14:38.125 "zoned": false, 00:14:38.125 "supported_io_types": { 00:14:38.125 "read": true, 00:14:38.125 "write": true, 00:14:38.125 "unmap": true, 00:14:38.125 "flush": true, 00:14:38.125 "reset": true, 00:14:38.125 "nvme_admin": false, 00:14:38.125 "nvme_io": false, 00:14:38.125 "nvme_io_md": false, 00:14:38.125 "write_zeroes": true, 00:14:38.125 "zcopy": true, 00:14:38.125 "get_zone_info": false, 00:14:38.125 "zone_management": false, 00:14:38.125 "zone_append": false, 00:14:38.125 "compare": false, 00:14:38.125 "compare_and_write": false, 00:14:38.125 "abort": true, 00:14:38.125 "seek_hole": false, 00:14:38.125 "seek_data": false, 00:14:38.125 "copy": true, 00:14:38.125 "nvme_iov_md": false 00:14:38.125 }, 00:14:38.125 "memory_domains": [ 00:14:38.125 { 00:14:38.125 "dma_device_id": "system", 00:14:38.125 "dma_device_type": 1 00:14:38.125 }, 00:14:38.125 { 00:14:38.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.125 "dma_device_type": 2 00:14:38.125 } 00:14:38.125 ], 00:14:38.125 "driver_specific": {} 00:14:38.125 } 00:14:38.125 ] 00:14:38.125 06:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:38.125 06:46:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:38.125 06:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:38.125 06:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:38.125 06:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:38.125 06:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:38.125 06:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:38.125 06:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:38.125 06:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:38.125 06:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:38.125 06:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:38.125 06:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.125 06:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.125 06:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:38.125 "name": "Existed_Raid", 00:14:38.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.125 "strip_size_kb": 64, 00:14:38.125 "state": "configuring", 00:14:38.125 "raid_level": "concat", 00:14:38.125 "superblock": false, 00:14:38.125 "num_base_bdevs": 4, 00:14:38.125 "num_base_bdevs_discovered": 1, 00:14:38.125 "num_base_bdevs_operational": 4, 00:14:38.125 "base_bdevs_list": [ 00:14:38.125 { 00:14:38.125 "name": "BaseBdev1", 00:14:38.125 "uuid": "3c7850d9-bc4f-46db-9e5d-7ad26b4fcc94", 00:14:38.125 "is_configured": true, 00:14:38.125 "data_offset": 0, 00:14:38.125 "data_size": 65536 00:14:38.125 }, 00:14:38.125 { 00:14:38.125 "name": "BaseBdev2", 00:14:38.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.125 "is_configured": false, 00:14:38.125 "data_offset": 0, 00:14:38.125 "data_size": 0 00:14:38.125 }, 00:14:38.125 { 00:14:38.125 "name": "BaseBdev3", 00:14:38.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.125 "is_configured": false, 00:14:38.125 "data_offset": 0, 00:14:38.125 "data_size": 0 00:14:38.125 }, 00:14:38.125 { 00:14:38.125 "name": "BaseBdev4", 00:14:38.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.125 "is_configured": false, 00:14:38.125 "data_offset": 0, 00:14:38.125 "data_size": 0 00:14:38.125 } 00:14:38.125 ] 00:14:38.125 }' 00:14:38.125 06:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:38.125 06:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.696 06:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:38.955 [2024-08-14 06:46:06.062604] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:38.955 [2024-08-14 06:46:06.062822] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 
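A minimal stand-alone sketch of the verify_raid_bdev_state() check traced above, for reference: it only assumes an SPDK target is still listening on /var/tmp/spdk-raid.sock, and the $rpc/$sock variables are shorthand introduced here (they are not part of the recorded run).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path taken from the trace above
  sock=/var/tmp/spdk-raid.sock
  # Pull the raid bdev list and keep only the entry named Existed_Raid,
  # exactly as the bdev_raid_get_bdevs/jq pair above does.
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  # A raid created before all of its base bdevs exist is expected to sit in the
  # "configuring" state, with only the members found so far counted as discovered.
  [[ $(jq -r '.state' <<< "$info") == configuring ]]
  [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 1 ]]
The same check repeats after each base bdev is added, with num_base_bdevs_discovered climbing toward 4.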
00:14:38.955 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:39.215 [2024-08-14 06:46:06.270376] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.215 [2024-08-14 06:46:06.272960] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:39.215 [2024-08-14 06:46:06.273067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:39.215 [2024-08-14 06:46:06.273110] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:39.215 [2024-08-14 06:46:06.273136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:39.215 [2024-08-14 06:46:06.273160] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:39.215 [2024-08-14 06:46:06.273205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:39.215 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:39.215 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:39.215 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:39.215 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:39.215 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:39.215 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:39.215 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:39.215 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:39.215 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:39.215 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:39.215 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:39.215 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:39.215 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.215 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.475 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:39.475 "name": "Existed_Raid", 00:14:39.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.475 "strip_size_kb": 64, 00:14:39.475 "state": "configuring", 00:14:39.475 "raid_level": "concat", 00:14:39.475 "superblock": false, 00:14:39.475 "num_base_bdevs": 4, 00:14:39.475 "num_base_bdevs_discovered": 1, 00:14:39.475 "num_base_bdevs_operational": 4, 00:14:39.475 "base_bdevs_list": [ 00:14:39.475 { 00:14:39.475 "name": "BaseBdev1", 00:14:39.475 "uuid": "3c7850d9-bc4f-46db-9e5d-7ad26b4fcc94", 00:14:39.475 "is_configured": true, 00:14:39.475 "data_offset": 0, 00:14:39.475 
"data_size": 65536 00:14:39.475 }, 00:14:39.475 { 00:14:39.475 "name": "BaseBdev2", 00:14:39.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.475 "is_configured": false, 00:14:39.475 "data_offset": 0, 00:14:39.475 "data_size": 0 00:14:39.475 }, 00:14:39.475 { 00:14:39.475 "name": "BaseBdev3", 00:14:39.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.475 "is_configured": false, 00:14:39.475 "data_offset": 0, 00:14:39.475 "data_size": 0 00:14:39.475 }, 00:14:39.475 { 00:14:39.475 "name": "BaseBdev4", 00:14:39.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.475 "is_configured": false, 00:14:39.475 "data_offset": 0, 00:14:39.475 "data_size": 0 00:14:39.475 } 00:14:39.475 ] 00:14:39.475 }' 00:14:39.475 06:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:39.475 06:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.097 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:40.097 [2024-08-14 06:46:07.240925] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.097 BaseBdev2 00:14:40.097 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:40.097 06:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:40.097 06:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:40.097 06:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:40.097 06:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:40.097 06:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:40.097 06:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:40.357 06:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:40.617 [ 00:14:40.617 { 00:14:40.617 "name": "BaseBdev2", 00:14:40.617 "aliases": [ 00:14:40.617 "2bb3d484-5f24-435b-a314-d7063e71cd9b" 00:14:40.617 ], 00:14:40.617 "product_name": "Malloc disk", 00:14:40.617 "block_size": 512, 00:14:40.617 "num_blocks": 65536, 00:14:40.617 "uuid": "2bb3d484-5f24-435b-a314-d7063e71cd9b", 00:14:40.617 "assigned_rate_limits": { 00:14:40.617 "rw_ios_per_sec": 0, 00:14:40.617 "rw_mbytes_per_sec": 0, 00:14:40.617 "r_mbytes_per_sec": 0, 00:14:40.617 "w_mbytes_per_sec": 0 00:14:40.617 }, 00:14:40.617 "claimed": true, 00:14:40.617 "claim_type": "exclusive_write", 00:14:40.617 "zoned": false, 00:14:40.617 "supported_io_types": { 00:14:40.617 "read": true, 00:14:40.617 "write": true, 00:14:40.617 "unmap": true, 00:14:40.617 "flush": true, 00:14:40.617 "reset": true, 00:14:40.617 "nvme_admin": false, 00:14:40.617 "nvme_io": false, 00:14:40.617 "nvme_io_md": false, 00:14:40.617 "write_zeroes": true, 00:14:40.617 "zcopy": true, 00:14:40.617 "get_zone_info": false, 00:14:40.617 "zone_management": false, 00:14:40.617 "zone_append": false, 00:14:40.617 "compare": false, 00:14:40.617 "compare_and_write": false, 00:14:40.617 "abort": true, 00:14:40.617 "seek_hole": false, 
00:14:40.617 "seek_data": false, 00:14:40.617 "copy": true, 00:14:40.617 "nvme_iov_md": false 00:14:40.617 }, 00:14:40.617 "memory_domains": [ 00:14:40.617 { 00:14:40.617 "dma_device_id": "system", 00:14:40.617 "dma_device_type": 1 00:14:40.617 }, 00:14:40.617 { 00:14:40.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.617 "dma_device_type": 2 00:14:40.617 } 00:14:40.617 ], 00:14:40.617 "driver_specific": {} 00:14:40.617 } 00:14:40.617 ] 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:40.617 "name": "Existed_Raid", 00:14:40.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.617 "strip_size_kb": 64, 00:14:40.617 "state": "configuring", 00:14:40.617 "raid_level": "concat", 00:14:40.617 "superblock": false, 00:14:40.617 "num_base_bdevs": 4, 00:14:40.617 "num_base_bdevs_discovered": 2, 00:14:40.617 "num_base_bdevs_operational": 4, 00:14:40.617 "base_bdevs_list": [ 00:14:40.617 { 00:14:40.617 "name": "BaseBdev1", 00:14:40.617 "uuid": "3c7850d9-bc4f-46db-9e5d-7ad26b4fcc94", 00:14:40.617 "is_configured": true, 00:14:40.617 "data_offset": 0, 00:14:40.617 "data_size": 65536 00:14:40.617 }, 00:14:40.617 { 00:14:40.617 "name": "BaseBdev2", 00:14:40.617 "uuid": "2bb3d484-5f24-435b-a314-d7063e71cd9b", 00:14:40.617 "is_configured": true, 00:14:40.617 "data_offset": 0, 00:14:40.617 "data_size": 65536 00:14:40.617 }, 00:14:40.617 { 00:14:40.617 "name": "BaseBdev3", 00:14:40.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.617 "is_configured": false, 00:14:40.617 "data_offset": 0, 00:14:40.617 "data_size": 0 00:14:40.617 }, 00:14:40.617 { 00:14:40.617 "name": "BaseBdev4", 00:14:40.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.617 "is_configured": 
false, 00:14:40.617 "data_offset": 0, 00:14:40.617 "data_size": 0 00:14:40.617 } 00:14:40.617 ] 00:14:40.617 }' 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:40.617 06:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.188 06:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:41.449 [2024-08-14 06:46:08.612211] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:41.449 BaseBdev3 00:14:41.449 06:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:41.449 06:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:41.449 06:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:41.449 06:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:41.449 06:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:41.449 06:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:41.449 06:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:41.709 06:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:41.968 [ 00:14:41.968 { 00:14:41.968 "name": "BaseBdev3", 00:14:41.968 "aliases": [ 00:14:41.968 "66d8b913-9bb8-4e7d-b102-fdb695cbdc31" 00:14:41.968 ], 00:14:41.968 "product_name": "Malloc disk", 00:14:41.968 "block_size": 512, 00:14:41.968 "num_blocks": 65536, 00:14:41.968 "uuid": "66d8b913-9bb8-4e7d-b102-fdb695cbdc31", 00:14:41.968 "assigned_rate_limits": { 00:14:41.968 "rw_ios_per_sec": 0, 00:14:41.969 "rw_mbytes_per_sec": 0, 00:14:41.969 "r_mbytes_per_sec": 0, 00:14:41.969 "w_mbytes_per_sec": 0 00:14:41.969 }, 00:14:41.969 "claimed": true, 00:14:41.969 "claim_type": "exclusive_write", 00:14:41.969 "zoned": false, 00:14:41.969 "supported_io_types": { 00:14:41.969 "read": true, 00:14:41.969 "write": true, 00:14:41.969 "unmap": true, 00:14:41.969 "flush": true, 00:14:41.969 "reset": true, 00:14:41.969 "nvme_admin": false, 00:14:41.969 "nvme_io": false, 00:14:41.969 "nvme_io_md": false, 00:14:41.969 "write_zeroes": true, 00:14:41.969 "zcopy": true, 00:14:41.969 "get_zone_info": false, 00:14:41.969 "zone_management": false, 00:14:41.969 "zone_append": false, 00:14:41.969 "compare": false, 00:14:41.969 "compare_and_write": false, 00:14:41.969 "abort": true, 00:14:41.969 "seek_hole": false, 00:14:41.969 "seek_data": false, 00:14:41.969 "copy": true, 00:14:41.969 "nvme_iov_md": false 00:14:41.969 }, 00:14:41.969 "memory_domains": [ 00:14:41.969 { 00:14:41.969 "dma_device_id": "system", 00:14:41.969 "dma_device_type": 1 00:14:41.969 }, 00:14:41.969 { 00:14:41.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.969 "dma_device_type": 2 00:14:41.969 } 00:14:41.969 ], 00:14:41.969 "driver_specific": {} 00:14:41.969 } 00:14:41.969 ] 00:14:41.969 06:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:41.969 06:46:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:41.969 06:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:41.969 06:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:41.969 06:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:41.969 06:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:41.969 06:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:41.969 06:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:41.969 06:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:41.969 06:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.969 06:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.969 06:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.969 06:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.969 06:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.969 06:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.969 06:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:41.969 "name": "Existed_Raid", 00:14:41.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.969 "strip_size_kb": 64, 00:14:41.969 "state": "configuring", 00:14:41.969 "raid_level": "concat", 00:14:41.969 "superblock": false, 00:14:41.969 "num_base_bdevs": 4, 00:14:41.969 "num_base_bdevs_discovered": 3, 00:14:41.969 "num_base_bdevs_operational": 4, 00:14:41.969 "base_bdevs_list": [ 00:14:41.969 { 00:14:41.969 "name": "BaseBdev1", 00:14:41.969 "uuid": "3c7850d9-bc4f-46db-9e5d-7ad26b4fcc94", 00:14:41.969 "is_configured": true, 00:14:41.969 "data_offset": 0, 00:14:41.969 "data_size": 65536 00:14:41.969 }, 00:14:41.969 { 00:14:41.969 "name": "BaseBdev2", 00:14:41.969 "uuid": "2bb3d484-5f24-435b-a314-d7063e71cd9b", 00:14:41.969 "is_configured": true, 00:14:41.969 "data_offset": 0, 00:14:41.969 "data_size": 65536 00:14:41.969 }, 00:14:41.969 { 00:14:41.969 "name": "BaseBdev3", 00:14:41.969 "uuid": "66d8b913-9bb8-4e7d-b102-fdb695cbdc31", 00:14:41.969 "is_configured": true, 00:14:41.969 "data_offset": 0, 00:14:41.969 "data_size": 65536 00:14:41.969 }, 00:14:41.969 { 00:14:41.969 "name": "BaseBdev4", 00:14:41.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.969 "is_configured": false, 00:14:41.969 "data_offset": 0, 00:14:41.969 "data_size": 0 00:14:41.969 } 00:14:41.969 ] 00:14:41.969 }' 00:14:41.969 06:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:41.969 06:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.537 06:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:42.797 [2024-08-14 06:46:09.883485] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:14:42.797 [2024-08-14 06:46:09.883545] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:42.797 [2024-08-14 06:46:09.883574] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:42.797 [2024-08-14 06:46:09.883931] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:42.797 [2024-08-14 06:46:09.884102] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:42.797 [2024-08-14 06:46:09.884120] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:42.797 [2024-08-14 06:46:09.884361] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.797 BaseBdev4 00:14:42.797 06:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:42.797 06:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:14:42.797 06:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:42.797 06:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:42.797 06:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:42.797 06:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:42.797 06:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:43.057 06:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:43.057 [ 00:14:43.057 { 00:14:43.057 "name": "BaseBdev4", 00:14:43.057 "aliases": [ 00:14:43.057 "fec0ab9e-df23-4a59-95fd-f13d0b3127b1" 00:14:43.057 ], 00:14:43.057 "product_name": "Malloc disk", 00:14:43.057 "block_size": 512, 00:14:43.057 "num_blocks": 65536, 00:14:43.057 "uuid": "fec0ab9e-df23-4a59-95fd-f13d0b3127b1", 00:14:43.057 "assigned_rate_limits": { 00:14:43.057 "rw_ios_per_sec": 0, 00:14:43.057 "rw_mbytes_per_sec": 0, 00:14:43.057 "r_mbytes_per_sec": 0, 00:14:43.057 "w_mbytes_per_sec": 0 00:14:43.057 }, 00:14:43.057 "claimed": true, 00:14:43.057 "claim_type": "exclusive_write", 00:14:43.057 "zoned": false, 00:14:43.057 "supported_io_types": { 00:14:43.057 "read": true, 00:14:43.057 "write": true, 00:14:43.057 "unmap": true, 00:14:43.057 "flush": true, 00:14:43.057 "reset": true, 00:14:43.057 "nvme_admin": false, 00:14:43.057 "nvme_io": false, 00:14:43.057 "nvme_io_md": false, 00:14:43.057 "write_zeroes": true, 00:14:43.057 "zcopy": true, 00:14:43.057 "get_zone_info": false, 00:14:43.057 "zone_management": false, 00:14:43.057 "zone_append": false, 00:14:43.057 "compare": false, 00:14:43.057 "compare_and_write": false, 00:14:43.057 "abort": true, 00:14:43.057 "seek_hole": false, 00:14:43.057 "seek_data": false, 00:14:43.057 "copy": true, 00:14:43.057 "nvme_iov_md": false 00:14:43.057 }, 00:14:43.057 "memory_domains": [ 00:14:43.057 { 00:14:43.057 "dma_device_id": "system", 00:14:43.057 "dma_device_type": 1 00:14:43.057 }, 00:14:43.057 { 00:14:43.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.057 "dma_device_type": 2 00:14:43.057 } 00:14:43.057 ], 00:14:43.057 "driver_specific": {} 00:14:43.057 } 00:14:43.057 ] 
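Condensed, the construction sequence the trace has driven up to this point looks roughly like the following (same $rpc/$sock shorthand and running-target assumption as above; the real test interleaves verify_raid_bdev_state checks between the steps):
  # Declare the concat raid first; until every named base bdev shows up it
  # stays registered in the "configuring" state.
  "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # Create the four 32 MiB / 512-byte-block malloc bdevs; each one is claimed
  # by the raid as soon as it appears.
  for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$b"
  done
  # Once the fourth member is claimed the raid flips to "online" and exposes
  # 4 x 65536 = 262144 blocks of 512 bytes, matching the DEBUG lines above.
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'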
00:14:43.057 06:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:43.057 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:43.057 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:43.057 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:43.057 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:43.057 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:43.057 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:43.057 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:43.057 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:43.057 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:43.057 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:43.057 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:43.057 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:43.057 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.057 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.317 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:43.317 "name": "Existed_Raid", 00:14:43.317 "uuid": "a4cc7d89-caf9-49ae-89c9-f34991fc2389", 00:14:43.317 "strip_size_kb": 64, 00:14:43.317 "state": "online", 00:14:43.317 "raid_level": "concat", 00:14:43.317 "superblock": false, 00:14:43.317 "num_base_bdevs": 4, 00:14:43.317 "num_base_bdevs_discovered": 4, 00:14:43.317 "num_base_bdevs_operational": 4, 00:14:43.317 "base_bdevs_list": [ 00:14:43.317 { 00:14:43.317 "name": "BaseBdev1", 00:14:43.317 "uuid": "3c7850d9-bc4f-46db-9e5d-7ad26b4fcc94", 00:14:43.317 "is_configured": true, 00:14:43.317 "data_offset": 0, 00:14:43.317 "data_size": 65536 00:14:43.317 }, 00:14:43.317 { 00:14:43.317 "name": "BaseBdev2", 00:14:43.317 "uuid": "2bb3d484-5f24-435b-a314-d7063e71cd9b", 00:14:43.317 "is_configured": true, 00:14:43.317 "data_offset": 0, 00:14:43.317 "data_size": 65536 00:14:43.317 }, 00:14:43.317 { 00:14:43.317 "name": "BaseBdev3", 00:14:43.317 "uuid": "66d8b913-9bb8-4e7d-b102-fdb695cbdc31", 00:14:43.317 "is_configured": true, 00:14:43.317 "data_offset": 0, 00:14:43.317 "data_size": 65536 00:14:43.317 }, 00:14:43.317 { 00:14:43.317 "name": "BaseBdev4", 00:14:43.317 "uuid": "fec0ab9e-df23-4a59-95fd-f13d0b3127b1", 00:14:43.317 "is_configured": true, 00:14:43.317 "data_offset": 0, 00:14:43.317 "data_size": 65536 00:14:43.317 } 00:14:43.317 ] 00:14:43.317 }' 00:14:43.317 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:43.317 06:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.885 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:43.885 
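The verify_raid_bdev_properties() pass traced below compares the raid volume's block_size, md_size, md_interleave and dif_type against every configured base bdev. Reduced to its essentials (same shorthand and assumptions as the sketches above, not part of the recorded run), the loop is roughly:
  # Describe the raid volume itself, then list its configured members,
  # mirroring the bdev_get_bdevs/jq calls traced below.
  raid_info=$("$rpc" -s "$sock" bdev_get_bdevs -b Existed_Raid | jq '.[]')
  names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
                 | select(.is_configured == true).name' <<< "$raid_info")
  for name in $names; do
      base_info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
      # The raid must report the same layout parameters as each member
      # (512-byte blocks, no metadata, no DIF in this run).
      for field in .block_size .md_size .md_interleave .dif_type; do
          [[ "$(jq -r "$field" <<< "$raid_info")" == "$(jq -r "$field" <<< "$base_info")" ]]
      done
  done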
06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:43.885 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:43.885 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:43.885 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:43.885 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:43.885 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:43.885 06:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:44.144 [2024-08-14 06:46:11.170023] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.144 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:44.144 "name": "Existed_Raid", 00:14:44.144 "aliases": [ 00:14:44.144 "a4cc7d89-caf9-49ae-89c9-f34991fc2389" 00:14:44.144 ], 00:14:44.144 "product_name": "Raid Volume", 00:14:44.144 "block_size": 512, 00:14:44.144 "num_blocks": 262144, 00:14:44.144 "uuid": "a4cc7d89-caf9-49ae-89c9-f34991fc2389", 00:14:44.144 "assigned_rate_limits": { 00:14:44.144 "rw_ios_per_sec": 0, 00:14:44.144 "rw_mbytes_per_sec": 0, 00:14:44.144 "r_mbytes_per_sec": 0, 00:14:44.144 "w_mbytes_per_sec": 0 00:14:44.144 }, 00:14:44.144 "claimed": false, 00:14:44.144 "zoned": false, 00:14:44.144 "supported_io_types": { 00:14:44.144 "read": true, 00:14:44.144 "write": true, 00:14:44.144 "unmap": true, 00:14:44.144 "flush": true, 00:14:44.144 "reset": true, 00:14:44.144 "nvme_admin": false, 00:14:44.144 "nvme_io": false, 00:14:44.144 "nvme_io_md": false, 00:14:44.144 "write_zeroes": true, 00:14:44.144 "zcopy": false, 00:14:44.144 "get_zone_info": false, 00:14:44.144 "zone_management": false, 00:14:44.144 "zone_append": false, 00:14:44.144 "compare": false, 00:14:44.144 "compare_and_write": false, 00:14:44.144 "abort": false, 00:14:44.144 "seek_hole": false, 00:14:44.144 "seek_data": false, 00:14:44.144 "copy": false, 00:14:44.144 "nvme_iov_md": false 00:14:44.144 }, 00:14:44.144 "memory_domains": [ 00:14:44.144 { 00:14:44.144 "dma_device_id": "system", 00:14:44.144 "dma_device_type": 1 00:14:44.144 }, 00:14:44.144 { 00:14:44.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.144 "dma_device_type": 2 00:14:44.144 }, 00:14:44.144 { 00:14:44.144 "dma_device_id": "system", 00:14:44.144 "dma_device_type": 1 00:14:44.144 }, 00:14:44.144 { 00:14:44.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.144 "dma_device_type": 2 00:14:44.144 }, 00:14:44.144 { 00:14:44.144 "dma_device_id": "system", 00:14:44.144 "dma_device_type": 1 00:14:44.144 }, 00:14:44.144 { 00:14:44.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.144 "dma_device_type": 2 00:14:44.144 }, 00:14:44.144 { 00:14:44.144 "dma_device_id": "system", 00:14:44.144 "dma_device_type": 1 00:14:44.144 }, 00:14:44.144 { 00:14:44.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.144 "dma_device_type": 2 00:14:44.144 } 00:14:44.144 ], 00:14:44.144 "driver_specific": { 00:14:44.144 "raid": { 00:14:44.144 "uuid": "a4cc7d89-caf9-49ae-89c9-f34991fc2389", 00:14:44.144 "strip_size_kb": 64, 00:14:44.144 "state": "online", 00:14:44.144 "raid_level": "concat", 00:14:44.144 "superblock": false, 00:14:44.144 "num_base_bdevs": 4, 00:14:44.144 
"num_base_bdevs_discovered": 4, 00:14:44.144 "num_base_bdevs_operational": 4, 00:14:44.144 "base_bdevs_list": [ 00:14:44.144 { 00:14:44.144 "name": "BaseBdev1", 00:14:44.145 "uuid": "3c7850d9-bc4f-46db-9e5d-7ad26b4fcc94", 00:14:44.145 "is_configured": true, 00:14:44.145 "data_offset": 0, 00:14:44.145 "data_size": 65536 00:14:44.145 }, 00:14:44.145 { 00:14:44.145 "name": "BaseBdev2", 00:14:44.145 "uuid": "2bb3d484-5f24-435b-a314-d7063e71cd9b", 00:14:44.145 "is_configured": true, 00:14:44.145 "data_offset": 0, 00:14:44.145 "data_size": 65536 00:14:44.145 }, 00:14:44.145 { 00:14:44.145 "name": "BaseBdev3", 00:14:44.145 "uuid": "66d8b913-9bb8-4e7d-b102-fdb695cbdc31", 00:14:44.145 "is_configured": true, 00:14:44.145 "data_offset": 0, 00:14:44.145 "data_size": 65536 00:14:44.145 }, 00:14:44.145 { 00:14:44.145 "name": "BaseBdev4", 00:14:44.145 "uuid": "fec0ab9e-df23-4a59-95fd-f13d0b3127b1", 00:14:44.145 "is_configured": true, 00:14:44.145 "data_offset": 0, 00:14:44.145 "data_size": 65536 00:14:44.145 } 00:14:44.145 ] 00:14:44.145 } 00:14:44.145 } 00:14:44.145 }' 00:14:44.145 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:44.145 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:44.145 BaseBdev2 00:14:44.145 BaseBdev3 00:14:44.145 BaseBdev4' 00:14:44.145 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:44.145 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:44.145 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:44.404 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:44.404 "name": "BaseBdev1", 00:14:44.404 "aliases": [ 00:14:44.404 "3c7850d9-bc4f-46db-9e5d-7ad26b4fcc94" 00:14:44.404 ], 00:14:44.404 "product_name": "Malloc disk", 00:14:44.404 "block_size": 512, 00:14:44.404 "num_blocks": 65536, 00:14:44.404 "uuid": "3c7850d9-bc4f-46db-9e5d-7ad26b4fcc94", 00:14:44.404 "assigned_rate_limits": { 00:14:44.404 "rw_ios_per_sec": 0, 00:14:44.404 "rw_mbytes_per_sec": 0, 00:14:44.404 "r_mbytes_per_sec": 0, 00:14:44.404 "w_mbytes_per_sec": 0 00:14:44.404 }, 00:14:44.404 "claimed": true, 00:14:44.404 "claim_type": "exclusive_write", 00:14:44.404 "zoned": false, 00:14:44.404 "supported_io_types": { 00:14:44.404 "read": true, 00:14:44.404 "write": true, 00:14:44.404 "unmap": true, 00:14:44.404 "flush": true, 00:14:44.404 "reset": true, 00:14:44.404 "nvme_admin": false, 00:14:44.404 "nvme_io": false, 00:14:44.404 "nvme_io_md": false, 00:14:44.404 "write_zeroes": true, 00:14:44.404 "zcopy": true, 00:14:44.404 "get_zone_info": false, 00:14:44.404 "zone_management": false, 00:14:44.404 "zone_append": false, 00:14:44.404 "compare": false, 00:14:44.404 "compare_and_write": false, 00:14:44.404 "abort": true, 00:14:44.404 "seek_hole": false, 00:14:44.404 "seek_data": false, 00:14:44.404 "copy": true, 00:14:44.404 "nvme_iov_md": false 00:14:44.404 }, 00:14:44.404 "memory_domains": [ 00:14:44.404 { 00:14:44.404 "dma_device_id": "system", 00:14:44.404 "dma_device_type": 1 00:14:44.404 }, 00:14:44.404 { 00:14:44.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.404 "dma_device_type": 2 00:14:44.404 } 00:14:44.404 ], 00:14:44.404 "driver_specific": {} 00:14:44.404 }' 
00:14:44.404 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.404 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.404 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:44.404 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.404 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.404 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:44.404 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.404 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.664 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:44.664 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.664 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.664 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:44.664 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:44.664 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:44.664 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:44.923 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:44.923 "name": "BaseBdev2", 00:14:44.923 "aliases": [ 00:14:44.923 "2bb3d484-5f24-435b-a314-d7063e71cd9b" 00:14:44.923 ], 00:14:44.923 "product_name": "Malloc disk", 00:14:44.923 "block_size": 512, 00:14:44.923 "num_blocks": 65536, 00:14:44.923 "uuid": "2bb3d484-5f24-435b-a314-d7063e71cd9b", 00:14:44.923 "assigned_rate_limits": { 00:14:44.923 "rw_ios_per_sec": 0, 00:14:44.923 "rw_mbytes_per_sec": 0, 00:14:44.923 "r_mbytes_per_sec": 0, 00:14:44.923 "w_mbytes_per_sec": 0 00:14:44.923 }, 00:14:44.923 "claimed": true, 00:14:44.923 "claim_type": "exclusive_write", 00:14:44.923 "zoned": false, 00:14:44.923 "supported_io_types": { 00:14:44.923 "read": true, 00:14:44.923 "write": true, 00:14:44.923 "unmap": true, 00:14:44.923 "flush": true, 00:14:44.923 "reset": true, 00:14:44.923 "nvme_admin": false, 00:14:44.923 "nvme_io": false, 00:14:44.923 "nvme_io_md": false, 00:14:44.923 "write_zeroes": true, 00:14:44.923 "zcopy": true, 00:14:44.923 "get_zone_info": false, 00:14:44.923 "zone_management": false, 00:14:44.923 "zone_append": false, 00:14:44.923 "compare": false, 00:14:44.923 "compare_and_write": false, 00:14:44.923 "abort": true, 00:14:44.923 "seek_hole": false, 00:14:44.923 "seek_data": false, 00:14:44.923 "copy": true, 00:14:44.923 "nvme_iov_md": false 00:14:44.923 }, 00:14:44.923 "memory_domains": [ 00:14:44.923 { 00:14:44.923 "dma_device_id": "system", 00:14:44.923 "dma_device_type": 1 00:14:44.923 }, 00:14:44.923 { 00:14:44.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.923 "dma_device_type": 2 00:14:44.923 } 00:14:44.923 ], 00:14:44.923 "driver_specific": {} 00:14:44.923 }' 00:14:44.923 06:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.923 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:14:44.923 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:44.923 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.923 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.923 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:44.923 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.923 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:45.183 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:45.183 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:45.183 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:45.183 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:45.183 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:45.183 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:45.183 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:45.443 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:45.443 "name": "BaseBdev3", 00:14:45.443 "aliases": [ 00:14:45.443 "66d8b913-9bb8-4e7d-b102-fdb695cbdc31" 00:14:45.443 ], 00:14:45.443 "product_name": "Malloc disk", 00:14:45.443 "block_size": 512, 00:14:45.443 "num_blocks": 65536, 00:14:45.443 "uuid": "66d8b913-9bb8-4e7d-b102-fdb695cbdc31", 00:14:45.443 "assigned_rate_limits": { 00:14:45.443 "rw_ios_per_sec": 0, 00:14:45.443 "rw_mbytes_per_sec": 0, 00:14:45.443 "r_mbytes_per_sec": 0, 00:14:45.443 "w_mbytes_per_sec": 0 00:14:45.443 }, 00:14:45.443 "claimed": true, 00:14:45.443 "claim_type": "exclusive_write", 00:14:45.443 "zoned": false, 00:14:45.443 "supported_io_types": { 00:14:45.443 "read": true, 00:14:45.443 "write": true, 00:14:45.443 "unmap": true, 00:14:45.443 "flush": true, 00:14:45.443 "reset": true, 00:14:45.443 "nvme_admin": false, 00:14:45.443 "nvme_io": false, 00:14:45.443 "nvme_io_md": false, 00:14:45.443 "write_zeroes": true, 00:14:45.443 "zcopy": true, 00:14:45.443 "get_zone_info": false, 00:14:45.443 "zone_management": false, 00:14:45.443 "zone_append": false, 00:14:45.443 "compare": false, 00:14:45.443 "compare_and_write": false, 00:14:45.443 "abort": true, 00:14:45.443 "seek_hole": false, 00:14:45.443 "seek_data": false, 00:14:45.443 "copy": true, 00:14:45.443 "nvme_iov_md": false 00:14:45.443 }, 00:14:45.443 "memory_domains": [ 00:14:45.443 { 00:14:45.444 "dma_device_id": "system", 00:14:45.444 "dma_device_type": 1 00:14:45.444 }, 00:14:45.444 { 00:14:45.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.444 "dma_device_type": 2 00:14:45.444 } 00:14:45.444 ], 00:14:45.444 "driver_specific": {} 00:14:45.444 }' 00:14:45.444 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:45.444 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:45.444 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:45.444 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- 
# jq .md_size 00:14:45.444 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:45.444 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:45.444 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:45.703 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:45.703 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:45.703 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:45.703 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:45.703 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:45.703 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:45.703 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:45.703 06:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:45.962 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:45.962 "name": "BaseBdev4", 00:14:45.962 "aliases": [ 00:14:45.962 "fec0ab9e-df23-4a59-95fd-f13d0b3127b1" 00:14:45.962 ], 00:14:45.962 "product_name": "Malloc disk", 00:14:45.962 "block_size": 512, 00:14:45.962 "num_blocks": 65536, 00:14:45.962 "uuid": "fec0ab9e-df23-4a59-95fd-f13d0b3127b1", 00:14:45.962 "assigned_rate_limits": { 00:14:45.962 "rw_ios_per_sec": 0, 00:14:45.962 "rw_mbytes_per_sec": 0, 00:14:45.962 "r_mbytes_per_sec": 0, 00:14:45.962 "w_mbytes_per_sec": 0 00:14:45.962 }, 00:14:45.962 "claimed": true, 00:14:45.962 "claim_type": "exclusive_write", 00:14:45.962 "zoned": false, 00:14:45.962 "supported_io_types": { 00:14:45.962 "read": true, 00:14:45.962 "write": true, 00:14:45.962 "unmap": true, 00:14:45.962 "flush": true, 00:14:45.962 "reset": true, 00:14:45.962 "nvme_admin": false, 00:14:45.962 "nvme_io": false, 00:14:45.962 "nvme_io_md": false, 00:14:45.962 "write_zeroes": true, 00:14:45.962 "zcopy": true, 00:14:45.962 "get_zone_info": false, 00:14:45.962 "zone_management": false, 00:14:45.962 "zone_append": false, 00:14:45.962 "compare": false, 00:14:45.962 "compare_and_write": false, 00:14:45.962 "abort": true, 00:14:45.962 "seek_hole": false, 00:14:45.962 "seek_data": false, 00:14:45.962 "copy": true, 00:14:45.962 "nvme_iov_md": false 00:14:45.962 }, 00:14:45.962 "memory_domains": [ 00:14:45.962 { 00:14:45.962 "dma_device_id": "system", 00:14:45.962 "dma_device_type": 1 00:14:45.962 }, 00:14:45.962 { 00:14:45.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.962 "dma_device_type": 2 00:14:45.962 } 00:14:45.962 ], 00:14:45.962 "driver_specific": {} 00:14:45.962 }' 00:14:45.962 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:45.962 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:45.962 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:45.962 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:45.962 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:46.222 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # 
[[ null == null ]] 00:14:46.222 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:46.222 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:46.222 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:46.222 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:46.222 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:46.222 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:46.222 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:46.481 [2024-08-14 06:46:13.553880] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:46.481 [2024-08-14 06:46:13.553942] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:46.481 [2024-08-14 06:46:13.554013] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.481 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.740 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:46.740 "name": "Existed_Raid", 00:14:46.740 "uuid": "a4cc7d89-caf9-49ae-89c9-f34991fc2389", 00:14:46.740 "strip_size_kb": 64, 00:14:46.740 "state": "offline", 00:14:46.740 "raid_level": "concat", 00:14:46.740 "superblock": false, 00:14:46.740 "num_base_bdevs": 4, 
00:14:46.740 "num_base_bdevs_discovered": 3, 00:14:46.740 "num_base_bdevs_operational": 3, 00:14:46.740 "base_bdevs_list": [ 00:14:46.740 { 00:14:46.740 "name": null, 00:14:46.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.740 "is_configured": false, 00:14:46.740 "data_offset": 0, 00:14:46.740 "data_size": 65536 00:14:46.740 }, 00:14:46.740 { 00:14:46.740 "name": "BaseBdev2", 00:14:46.740 "uuid": "2bb3d484-5f24-435b-a314-d7063e71cd9b", 00:14:46.740 "is_configured": true, 00:14:46.740 "data_offset": 0, 00:14:46.740 "data_size": 65536 00:14:46.740 }, 00:14:46.740 { 00:14:46.740 "name": "BaseBdev3", 00:14:46.740 "uuid": "66d8b913-9bb8-4e7d-b102-fdb695cbdc31", 00:14:46.740 "is_configured": true, 00:14:46.740 "data_offset": 0, 00:14:46.740 "data_size": 65536 00:14:46.740 }, 00:14:46.740 { 00:14:46.740 "name": "BaseBdev4", 00:14:46.740 "uuid": "fec0ab9e-df23-4a59-95fd-f13d0b3127b1", 00:14:46.740 "is_configured": true, 00:14:46.740 "data_offset": 0, 00:14:46.740 "data_size": 65536 00:14:46.740 } 00:14:46.740 ] 00:14:46.740 }' 00:14:46.740 06:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:46.740 06:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.309 06:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:47.309 06:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:47.309 06:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.309 06:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:47.309 06:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:47.309 06:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:47.309 06:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:47.568 [2024-08-14 06:46:14.665447] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:47.568 06:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:47.568 06:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:47.568 06:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.568 06:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:47.828 06:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:47.828 06:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:47.828 06:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:48.088 [2024-08-14 06:46:15.097894] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:48.088 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:48.088 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:48.088 06:46:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.088 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:48.347 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:48.347 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.347 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:48.347 [2024-08-14 06:46:15.522578] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:48.347 [2024-08-14 06:46:15.522671] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:48.347 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:48.347 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:48.347 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.347 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:48.607 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:48.608 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:48.608 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:14:48.608 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:48.608 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:48.608 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:48.867 BaseBdev2 00:14:48.867 06:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:48.867 06:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:48.867 06:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:48.867 06:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:48.867 06:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:48.867 06:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:48.867 06:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:49.126 06:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:49.126 [ 00:14:49.126 { 00:14:49.126 "name": "BaseBdev2", 00:14:49.126 "aliases": [ 00:14:49.126 "eb96bc64-abd0-41b4-afe1-d409a8f56f44" 00:14:49.126 ], 00:14:49.126 "product_name": "Malloc disk", 00:14:49.126 "block_size": 512, 00:14:49.126 "num_blocks": 65536, 00:14:49.126 "uuid": 
"eb96bc64-abd0-41b4-afe1-d409a8f56f44", 00:14:49.126 "assigned_rate_limits": { 00:14:49.126 "rw_ios_per_sec": 0, 00:14:49.126 "rw_mbytes_per_sec": 0, 00:14:49.126 "r_mbytes_per_sec": 0, 00:14:49.126 "w_mbytes_per_sec": 0 00:14:49.126 }, 00:14:49.126 "claimed": false, 00:14:49.126 "zoned": false, 00:14:49.126 "supported_io_types": { 00:14:49.126 "read": true, 00:14:49.126 "write": true, 00:14:49.126 "unmap": true, 00:14:49.126 "flush": true, 00:14:49.126 "reset": true, 00:14:49.126 "nvme_admin": false, 00:14:49.126 "nvme_io": false, 00:14:49.126 "nvme_io_md": false, 00:14:49.126 "write_zeroes": true, 00:14:49.126 "zcopy": true, 00:14:49.126 "get_zone_info": false, 00:14:49.126 "zone_management": false, 00:14:49.126 "zone_append": false, 00:14:49.126 "compare": false, 00:14:49.126 "compare_and_write": false, 00:14:49.126 "abort": true, 00:14:49.126 "seek_hole": false, 00:14:49.126 "seek_data": false, 00:14:49.126 "copy": true, 00:14:49.126 "nvme_iov_md": false 00:14:49.127 }, 00:14:49.127 "memory_domains": [ 00:14:49.127 { 00:14:49.127 "dma_device_id": "system", 00:14:49.127 "dma_device_type": 1 00:14:49.127 }, 00:14:49.127 { 00:14:49.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.127 "dma_device_type": 2 00:14:49.127 } 00:14:49.127 ], 00:14:49.127 "driver_specific": {} 00:14:49.127 } 00:14:49.127 ] 00:14:49.127 06:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:49.127 06:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:49.127 06:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:49.127 06:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:49.386 BaseBdev3 00:14:49.386 06:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:49.386 06:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:49.386 06:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:49.386 06:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:49.386 06:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:49.386 06:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:49.386 06:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:49.646 06:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:49.906 [ 00:14:49.906 { 00:14:49.906 "name": "BaseBdev3", 00:14:49.906 "aliases": [ 00:14:49.906 "f10134a1-7def-4ff9-a494-e9f525fe97a1" 00:14:49.906 ], 00:14:49.906 "product_name": "Malloc disk", 00:14:49.906 "block_size": 512, 00:14:49.906 "num_blocks": 65536, 00:14:49.906 "uuid": "f10134a1-7def-4ff9-a494-e9f525fe97a1", 00:14:49.906 "assigned_rate_limits": { 00:14:49.906 "rw_ios_per_sec": 0, 00:14:49.906 "rw_mbytes_per_sec": 0, 00:14:49.906 "r_mbytes_per_sec": 0, 00:14:49.906 "w_mbytes_per_sec": 0 00:14:49.906 }, 00:14:49.906 "claimed": false, 00:14:49.906 "zoned": false, 00:14:49.906 "supported_io_types": { 00:14:49.906 
"read": true, 00:14:49.906 "write": true, 00:14:49.906 "unmap": true, 00:14:49.906 "flush": true, 00:14:49.906 "reset": true, 00:14:49.906 "nvme_admin": false, 00:14:49.906 "nvme_io": false, 00:14:49.906 "nvme_io_md": false, 00:14:49.906 "write_zeroes": true, 00:14:49.906 "zcopy": true, 00:14:49.906 "get_zone_info": false, 00:14:49.906 "zone_management": false, 00:14:49.906 "zone_append": false, 00:14:49.906 "compare": false, 00:14:49.906 "compare_and_write": false, 00:14:49.906 "abort": true, 00:14:49.906 "seek_hole": false, 00:14:49.906 "seek_data": false, 00:14:49.906 "copy": true, 00:14:49.906 "nvme_iov_md": false 00:14:49.906 }, 00:14:49.906 "memory_domains": [ 00:14:49.906 { 00:14:49.906 "dma_device_id": "system", 00:14:49.906 "dma_device_type": 1 00:14:49.906 }, 00:14:49.906 { 00:14:49.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.906 "dma_device_type": 2 00:14:49.906 } 00:14:49.906 ], 00:14:49.906 "driver_specific": {} 00:14:49.906 } 00:14:49.906 ] 00:14:49.906 06:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:49.906 06:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:49.906 06:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:49.906 06:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:49.906 BaseBdev4 00:14:49.906 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:14:49.906 06:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:14:49.906 06:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:49.906 06:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:49.906 06:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:49.906 06:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:49.906 06:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:50.166 06:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:50.426 [ 00:14:50.426 { 00:14:50.426 "name": "BaseBdev4", 00:14:50.426 "aliases": [ 00:14:50.426 "6e247a4f-5c99-4a93-8d0d-5f53ab83b4cc" 00:14:50.426 ], 00:14:50.426 "product_name": "Malloc disk", 00:14:50.426 "block_size": 512, 00:14:50.426 "num_blocks": 65536, 00:14:50.426 "uuid": "6e247a4f-5c99-4a93-8d0d-5f53ab83b4cc", 00:14:50.426 "assigned_rate_limits": { 00:14:50.426 "rw_ios_per_sec": 0, 00:14:50.426 "rw_mbytes_per_sec": 0, 00:14:50.426 "r_mbytes_per_sec": 0, 00:14:50.426 "w_mbytes_per_sec": 0 00:14:50.426 }, 00:14:50.426 "claimed": false, 00:14:50.426 "zoned": false, 00:14:50.426 "supported_io_types": { 00:14:50.426 "read": true, 00:14:50.426 "write": true, 00:14:50.426 "unmap": true, 00:14:50.426 "flush": true, 00:14:50.426 "reset": true, 00:14:50.426 "nvme_admin": false, 00:14:50.426 "nvme_io": false, 00:14:50.426 "nvme_io_md": false, 00:14:50.426 "write_zeroes": true, 00:14:50.426 "zcopy": true, 00:14:50.426 "get_zone_info": false, 00:14:50.426 
"zone_management": false, 00:14:50.426 "zone_append": false, 00:14:50.426 "compare": false, 00:14:50.426 "compare_and_write": false, 00:14:50.426 "abort": true, 00:14:50.426 "seek_hole": false, 00:14:50.426 "seek_data": false, 00:14:50.426 "copy": true, 00:14:50.426 "nvme_iov_md": false 00:14:50.426 }, 00:14:50.426 "memory_domains": [ 00:14:50.426 { 00:14:50.426 "dma_device_id": "system", 00:14:50.426 "dma_device_type": 1 00:14:50.426 }, 00:14:50.426 { 00:14:50.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.427 "dma_device_type": 2 00:14:50.427 } 00:14:50.427 ], 00:14:50.427 "driver_specific": {} 00:14:50.427 } 00:14:50.427 ] 00:14:50.427 06:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:50.427 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:50.427 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:50.427 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:50.686 [2024-08-14 06:46:17.707569] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.687 [2024-08-14 06:46:17.707775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.687 [2024-08-14 06:46:17.707831] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:50.687 [2024-08-14 06:46:17.710302] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.687 [2024-08-14 06:46:17.710432] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:50.687 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:50.687 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:50.687 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:50.687 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:50.687 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:50.687 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:50.687 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.687 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.687 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.687 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.687 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.687 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.687 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:50.687 "name": "Existed_Raid", 00:14:50.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.687 "strip_size_kb": 
64, 00:14:50.687 "state": "configuring", 00:14:50.687 "raid_level": "concat", 00:14:50.687 "superblock": false, 00:14:50.687 "num_base_bdevs": 4, 00:14:50.687 "num_base_bdevs_discovered": 3, 00:14:50.687 "num_base_bdevs_operational": 4, 00:14:50.687 "base_bdevs_list": [ 00:14:50.687 { 00:14:50.687 "name": "BaseBdev1", 00:14:50.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.687 "is_configured": false, 00:14:50.687 "data_offset": 0, 00:14:50.687 "data_size": 0 00:14:50.687 }, 00:14:50.687 { 00:14:50.687 "name": "BaseBdev2", 00:14:50.687 "uuid": "eb96bc64-abd0-41b4-afe1-d409a8f56f44", 00:14:50.687 "is_configured": true, 00:14:50.687 "data_offset": 0, 00:14:50.687 "data_size": 65536 00:14:50.687 }, 00:14:50.687 { 00:14:50.687 "name": "BaseBdev3", 00:14:50.687 "uuid": "f10134a1-7def-4ff9-a494-e9f525fe97a1", 00:14:50.687 "is_configured": true, 00:14:50.687 "data_offset": 0, 00:14:50.687 "data_size": 65536 00:14:50.687 }, 00:14:50.687 { 00:14:50.687 "name": "BaseBdev4", 00:14:50.687 "uuid": "6e247a4f-5c99-4a93-8d0d-5f53ab83b4cc", 00:14:50.687 "is_configured": true, 00:14:50.687 "data_offset": 0, 00:14:50.687 "data_size": 65536 00:14:50.687 } 00:14:50.687 ] 00:14:50.687 }' 00:14:50.687 06:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:50.687 06:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.257 06:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:51.517 [2024-08-14 06:46:18.630055] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.517 06:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:51.517 06:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:51.517 06:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:51.517 06:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:51.517 06:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:51.517 06:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:51.517 06:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:51.517 06:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:51.517 06:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:51.517 06:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:51.517 06:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.517 06:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.777 06:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.777 "name": "Existed_Raid", 00:14:51.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.777 "strip_size_kb": 64, 00:14:51.777 "state": "configuring", 00:14:51.777 "raid_level": "concat", 00:14:51.777 "superblock": false, 00:14:51.777 "num_base_bdevs": 
4, 00:14:51.777 "num_base_bdevs_discovered": 2, 00:14:51.777 "num_base_bdevs_operational": 4, 00:14:51.777 "base_bdevs_list": [ 00:14:51.777 { 00:14:51.777 "name": "BaseBdev1", 00:14:51.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.777 "is_configured": false, 00:14:51.777 "data_offset": 0, 00:14:51.777 "data_size": 0 00:14:51.777 }, 00:14:51.777 { 00:14:51.777 "name": null, 00:14:51.777 "uuid": "eb96bc64-abd0-41b4-afe1-d409a8f56f44", 00:14:51.777 "is_configured": false, 00:14:51.777 "data_offset": 0, 00:14:51.777 "data_size": 65536 00:14:51.777 }, 00:14:51.777 { 00:14:51.777 "name": "BaseBdev3", 00:14:51.777 "uuid": "f10134a1-7def-4ff9-a494-e9f525fe97a1", 00:14:51.777 "is_configured": true, 00:14:51.777 "data_offset": 0, 00:14:51.777 "data_size": 65536 00:14:51.777 }, 00:14:51.777 { 00:14:51.777 "name": "BaseBdev4", 00:14:51.777 "uuid": "6e247a4f-5c99-4a93-8d0d-5f53ab83b4cc", 00:14:51.777 "is_configured": true, 00:14:51.777 "data_offset": 0, 00:14:51.777 "data_size": 65536 00:14:51.777 } 00:14:51.777 ] 00:14:51.777 }' 00:14:51.777 06:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.777 06:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.346 06:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.346 06:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:52.606 06:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:52.606 06:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:52.606 [2024-08-14 06:46:19.797459] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.606 BaseBdev1 00:14:52.606 06:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:52.606 06:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:52.606 06:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:52.606 06:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:52.606 06:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:52.606 06:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:52.606 06:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:52.869 06:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:53.142 [ 00:14:53.142 { 00:14:53.142 "name": "BaseBdev1", 00:14:53.142 "aliases": [ 00:14:53.142 "77cdf2b4-088f-42c9-b57c-28e56b3b853f" 00:14:53.142 ], 00:14:53.142 "product_name": "Malloc disk", 00:14:53.142 "block_size": 512, 00:14:53.142 "num_blocks": 65536, 00:14:53.142 "uuid": "77cdf2b4-088f-42c9-b57c-28e56b3b853f", 00:14:53.142 "assigned_rate_limits": { 00:14:53.142 "rw_ios_per_sec": 0, 00:14:53.142 "rw_mbytes_per_sec": 0, 00:14:53.142 "r_mbytes_per_sec": 0, 
00:14:53.142 "w_mbytes_per_sec": 0 00:14:53.142 }, 00:14:53.142 "claimed": true, 00:14:53.142 "claim_type": "exclusive_write", 00:14:53.142 "zoned": false, 00:14:53.142 "supported_io_types": { 00:14:53.142 "read": true, 00:14:53.142 "write": true, 00:14:53.142 "unmap": true, 00:14:53.142 "flush": true, 00:14:53.142 "reset": true, 00:14:53.142 "nvme_admin": false, 00:14:53.142 "nvme_io": false, 00:14:53.142 "nvme_io_md": false, 00:14:53.142 "write_zeroes": true, 00:14:53.142 "zcopy": true, 00:14:53.142 "get_zone_info": false, 00:14:53.142 "zone_management": false, 00:14:53.142 "zone_append": false, 00:14:53.142 "compare": false, 00:14:53.142 "compare_and_write": false, 00:14:53.142 "abort": true, 00:14:53.142 "seek_hole": false, 00:14:53.142 "seek_data": false, 00:14:53.142 "copy": true, 00:14:53.142 "nvme_iov_md": false 00:14:53.142 }, 00:14:53.142 "memory_domains": [ 00:14:53.142 { 00:14:53.142 "dma_device_id": "system", 00:14:53.142 "dma_device_type": 1 00:14:53.142 }, 00:14:53.142 { 00:14:53.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.142 "dma_device_type": 2 00:14:53.142 } 00:14:53.142 ], 00:14:53.142 "driver_specific": {} 00:14:53.142 } 00:14:53.142 ] 00:14:53.142 06:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:53.142 06:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:53.142 06:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:53.142 06:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:53.142 06:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:53.142 06:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:53.142 06:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:53.142 06:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:53.142 06:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:53.142 06:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:53.142 06:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:53.142 06:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.142 06:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.414 06:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:53.414 "name": "Existed_Raid", 00:14:53.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.414 "strip_size_kb": 64, 00:14:53.414 "state": "configuring", 00:14:53.414 "raid_level": "concat", 00:14:53.414 "superblock": false, 00:14:53.414 "num_base_bdevs": 4, 00:14:53.414 "num_base_bdevs_discovered": 3, 00:14:53.414 "num_base_bdevs_operational": 4, 00:14:53.414 "base_bdevs_list": [ 00:14:53.414 { 00:14:53.414 "name": "BaseBdev1", 00:14:53.414 "uuid": "77cdf2b4-088f-42c9-b57c-28e56b3b853f", 00:14:53.414 "is_configured": true, 00:14:53.414 "data_offset": 0, 00:14:53.414 "data_size": 65536 00:14:53.414 }, 00:14:53.414 { 00:14:53.414 "name": null, 00:14:53.414 
"uuid": "eb96bc64-abd0-41b4-afe1-d409a8f56f44", 00:14:53.414 "is_configured": false, 00:14:53.414 "data_offset": 0, 00:14:53.414 "data_size": 65536 00:14:53.414 }, 00:14:53.414 { 00:14:53.414 "name": "BaseBdev3", 00:14:53.414 "uuid": "f10134a1-7def-4ff9-a494-e9f525fe97a1", 00:14:53.414 "is_configured": true, 00:14:53.414 "data_offset": 0, 00:14:53.414 "data_size": 65536 00:14:53.414 }, 00:14:53.414 { 00:14:53.414 "name": "BaseBdev4", 00:14:53.414 "uuid": "6e247a4f-5c99-4a93-8d0d-5f53ab83b4cc", 00:14:53.414 "is_configured": true, 00:14:53.414 "data_offset": 0, 00:14:53.414 "data_size": 65536 00:14:53.414 } 00:14:53.414 ] 00:14:53.414 }' 00:14:53.414 06:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:53.414 06:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.983 06:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.983 06:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:53.983 06:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:53.983 06:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:54.243 [2024-08-14 06:46:21.343245] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:54.243 06:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:54.243 06:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:54.243 06:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:54.243 06:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:54.243 06:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:54.243 06:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:54.243 06:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.243 06:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.243 06:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.243 06:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.243 06:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.243 06:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.502 06:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:54.502 "name": "Existed_Raid", 00:14:54.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.502 "strip_size_kb": 64, 00:14:54.502 "state": "configuring", 00:14:54.502 "raid_level": "concat", 00:14:54.502 "superblock": false, 00:14:54.502 "num_base_bdevs": 4, 00:14:54.502 "num_base_bdevs_discovered": 2, 00:14:54.502 "num_base_bdevs_operational": 4, 00:14:54.502 "base_bdevs_list": [ 
00:14:54.502 { 00:14:54.502 "name": "BaseBdev1", 00:14:54.503 "uuid": "77cdf2b4-088f-42c9-b57c-28e56b3b853f", 00:14:54.503 "is_configured": true, 00:14:54.503 "data_offset": 0, 00:14:54.503 "data_size": 65536 00:14:54.503 }, 00:14:54.503 { 00:14:54.503 "name": null, 00:14:54.503 "uuid": "eb96bc64-abd0-41b4-afe1-d409a8f56f44", 00:14:54.503 "is_configured": false, 00:14:54.503 "data_offset": 0, 00:14:54.503 "data_size": 65536 00:14:54.503 }, 00:14:54.503 { 00:14:54.503 "name": null, 00:14:54.503 "uuid": "f10134a1-7def-4ff9-a494-e9f525fe97a1", 00:14:54.503 "is_configured": false, 00:14:54.503 "data_offset": 0, 00:14:54.503 "data_size": 65536 00:14:54.503 }, 00:14:54.503 { 00:14:54.503 "name": "BaseBdev4", 00:14:54.503 "uuid": "6e247a4f-5c99-4a93-8d0d-5f53ab83b4cc", 00:14:54.503 "is_configured": true, 00:14:54.503 "data_offset": 0, 00:14:54.503 "data_size": 65536 00:14:54.503 } 00:14:54.503 ] 00:14:54.503 }' 00:14:54.503 06:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:54.503 06:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.072 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.072 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:55.072 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:55.072 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:55.332 [2024-08-14 06:46:22.421607] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.332 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:55.332 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:55.332 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:55.332 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:55.332 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:55.332 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:55.332 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:55.332 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:55.332 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:55.332 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:55.332 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.332 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.592 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.592 "name": "Existed_Raid", 00:14:55.592 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:55.592 "strip_size_kb": 64, 00:14:55.592 "state": "configuring", 00:14:55.592 "raid_level": "concat", 00:14:55.592 "superblock": false, 00:14:55.592 "num_base_bdevs": 4, 00:14:55.592 "num_base_bdevs_discovered": 3, 00:14:55.592 "num_base_bdevs_operational": 4, 00:14:55.592 "base_bdevs_list": [ 00:14:55.592 { 00:14:55.592 "name": "BaseBdev1", 00:14:55.592 "uuid": "77cdf2b4-088f-42c9-b57c-28e56b3b853f", 00:14:55.592 "is_configured": true, 00:14:55.592 "data_offset": 0, 00:14:55.592 "data_size": 65536 00:14:55.592 }, 00:14:55.592 { 00:14:55.592 "name": null, 00:14:55.592 "uuid": "eb96bc64-abd0-41b4-afe1-d409a8f56f44", 00:14:55.592 "is_configured": false, 00:14:55.592 "data_offset": 0, 00:14:55.592 "data_size": 65536 00:14:55.592 }, 00:14:55.592 { 00:14:55.592 "name": "BaseBdev3", 00:14:55.592 "uuid": "f10134a1-7def-4ff9-a494-e9f525fe97a1", 00:14:55.592 "is_configured": true, 00:14:55.592 "data_offset": 0, 00:14:55.592 "data_size": 65536 00:14:55.592 }, 00:14:55.592 { 00:14:55.592 "name": "BaseBdev4", 00:14:55.592 "uuid": "6e247a4f-5c99-4a93-8d0d-5f53ab83b4cc", 00:14:55.592 "is_configured": true, 00:14:55.592 "data_offset": 0, 00:14:55.592 "data_size": 65536 00:14:55.592 } 00:14:55.592 ] 00:14:55.592 }' 00:14:55.592 06:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:55.592 06:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.162 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:56.162 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.162 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:56.162 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:56.421 [2024-08-14 06:46:23.503787] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.421 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:56.421 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:56.421 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:56.421 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:56.421 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:56.421 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:56.421 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:56.421 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:56.421 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:56.421 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:56.421 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.421 06:46:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.679 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:56.679 "name": "Existed_Raid", 00:14:56.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.679 "strip_size_kb": 64, 00:14:56.679 "state": "configuring", 00:14:56.680 "raid_level": "concat", 00:14:56.680 "superblock": false, 00:14:56.680 "num_base_bdevs": 4, 00:14:56.680 "num_base_bdevs_discovered": 2, 00:14:56.680 "num_base_bdevs_operational": 4, 00:14:56.680 "base_bdevs_list": [ 00:14:56.680 { 00:14:56.680 "name": null, 00:14:56.680 "uuid": "77cdf2b4-088f-42c9-b57c-28e56b3b853f", 00:14:56.680 "is_configured": false, 00:14:56.680 "data_offset": 0, 00:14:56.680 "data_size": 65536 00:14:56.680 }, 00:14:56.680 { 00:14:56.680 "name": null, 00:14:56.680 "uuid": "eb96bc64-abd0-41b4-afe1-d409a8f56f44", 00:14:56.680 "is_configured": false, 00:14:56.680 "data_offset": 0, 00:14:56.680 "data_size": 65536 00:14:56.680 }, 00:14:56.680 { 00:14:56.680 "name": "BaseBdev3", 00:14:56.680 "uuid": "f10134a1-7def-4ff9-a494-e9f525fe97a1", 00:14:56.680 "is_configured": true, 00:14:56.680 "data_offset": 0, 00:14:56.680 "data_size": 65536 00:14:56.680 }, 00:14:56.680 { 00:14:56.680 "name": "BaseBdev4", 00:14:56.680 "uuid": "6e247a4f-5c99-4a93-8d0d-5f53ab83b4cc", 00:14:56.680 "is_configured": true, 00:14:56.680 "data_offset": 0, 00:14:56.680 "data_size": 65536 00:14:56.680 } 00:14:56.680 ] 00:14:56.680 }' 00:14:56.680 06:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:56.680 06:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.248 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:57.248 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.248 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:57.248 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:57.507 [2024-08-14 06:46:24.618456] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.507 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:57.508 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:57.508 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:57.508 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:57.508 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:57.508 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:57.508 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.508 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.508 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:57.508 06:46:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.508 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.508 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.767 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:57.767 "name": "Existed_Raid", 00:14:57.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.767 "strip_size_kb": 64, 00:14:57.767 "state": "configuring", 00:14:57.767 "raid_level": "concat", 00:14:57.767 "superblock": false, 00:14:57.767 "num_base_bdevs": 4, 00:14:57.767 "num_base_bdevs_discovered": 3, 00:14:57.767 "num_base_bdevs_operational": 4, 00:14:57.767 "base_bdevs_list": [ 00:14:57.767 { 00:14:57.767 "name": null, 00:14:57.767 "uuid": "77cdf2b4-088f-42c9-b57c-28e56b3b853f", 00:14:57.767 "is_configured": false, 00:14:57.767 "data_offset": 0, 00:14:57.767 "data_size": 65536 00:14:57.767 }, 00:14:57.767 { 00:14:57.767 "name": "BaseBdev2", 00:14:57.767 "uuid": "eb96bc64-abd0-41b4-afe1-d409a8f56f44", 00:14:57.767 "is_configured": true, 00:14:57.767 "data_offset": 0, 00:14:57.767 "data_size": 65536 00:14:57.767 }, 00:14:57.767 { 00:14:57.767 "name": "BaseBdev3", 00:14:57.767 "uuid": "f10134a1-7def-4ff9-a494-e9f525fe97a1", 00:14:57.767 "is_configured": true, 00:14:57.767 "data_offset": 0, 00:14:57.767 "data_size": 65536 00:14:57.767 }, 00:14:57.767 { 00:14:57.767 "name": "BaseBdev4", 00:14:57.767 "uuid": "6e247a4f-5c99-4a93-8d0d-5f53ab83b4cc", 00:14:57.767 "is_configured": true, 00:14:57.767 "data_offset": 0, 00:14:57.767 "data_size": 65536 00:14:57.767 } 00:14:57.767 ] 00:14:57.767 }' 00:14:57.767 06:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.767 06:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.337 06:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.337 06:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:58.337 06:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:58.337 06:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:58.337 06:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.596 06:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 77cdf2b4-088f-42c9-b57c-28e56b3b853f 00:14:58.855 [2024-08-14 06:46:25.933835] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:58.855 [2024-08-14 06:46:25.933894] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:58.855 [2024-08-14 06:46:25.933906] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:58.855 [2024-08-14 06:46:25.934249] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:14:58.855 [2024-08-14 06:46:25.934400] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:58.855 [2024-08-14 06:46:25.934410] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:14:58.855 [2024-08-14 06:46:25.934638] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.855 NewBaseBdev 00:14:58.855 06:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:58.855 06:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:14:58.855 06:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:58.855 06:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:58.855 06:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:58.856 06:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:58.856 06:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:59.114 06:46:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:59.114 [ 00:14:59.114 { 00:14:59.114 "name": "NewBaseBdev", 00:14:59.114 "aliases": [ 00:14:59.114 "77cdf2b4-088f-42c9-b57c-28e56b3b853f" 00:14:59.114 ], 00:14:59.114 "product_name": "Malloc disk", 00:14:59.114 "block_size": 512, 00:14:59.114 "num_blocks": 65536, 00:14:59.114 "uuid": "77cdf2b4-088f-42c9-b57c-28e56b3b853f", 00:14:59.114 "assigned_rate_limits": { 00:14:59.114 "rw_ios_per_sec": 0, 00:14:59.114 "rw_mbytes_per_sec": 0, 00:14:59.114 "r_mbytes_per_sec": 0, 00:14:59.114 "w_mbytes_per_sec": 0 00:14:59.114 }, 00:14:59.114 "claimed": true, 00:14:59.114 "claim_type": "exclusive_write", 00:14:59.114 "zoned": false, 00:14:59.114 "supported_io_types": { 00:14:59.114 "read": true, 00:14:59.114 "write": true, 00:14:59.114 "unmap": true, 00:14:59.114 "flush": true, 00:14:59.114 "reset": true, 00:14:59.114 "nvme_admin": false, 00:14:59.114 "nvme_io": false, 00:14:59.114 "nvme_io_md": false, 00:14:59.114 "write_zeroes": true, 00:14:59.114 "zcopy": true, 00:14:59.114 "get_zone_info": false, 00:14:59.114 "zone_management": false, 00:14:59.114 "zone_append": false, 00:14:59.114 "compare": false, 00:14:59.114 "compare_and_write": false, 00:14:59.114 "abort": true, 00:14:59.114 "seek_hole": false, 00:14:59.114 "seek_data": false, 00:14:59.114 "copy": true, 00:14:59.114 "nvme_iov_md": false 00:14:59.114 }, 00:14:59.114 "memory_domains": [ 00:14:59.114 { 00:14:59.114 "dma_device_id": "system", 00:14:59.114 "dma_device_type": 1 00:14:59.114 }, 00:14:59.114 { 00:14:59.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.114 "dma_device_type": 2 00:14:59.114 } 00:14:59.114 ], 00:14:59.114 "driver_specific": {} 00:14:59.114 } 00:14:59.114 ] 00:14:59.114 06:46:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:59.114 06:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:59.114 06:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:59.114 06:46:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:59.114 06:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:59.114 06:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:59.114 06:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:59.114 06:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:59.114 06:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.114 06:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.114 06:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.114 06:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.114 06:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.373 06:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.373 "name": "Existed_Raid", 00:14:59.373 "uuid": "e9da50be-a61a-4652-a13e-d0fbfb00e70b", 00:14:59.373 "strip_size_kb": 64, 00:14:59.373 "state": "online", 00:14:59.373 "raid_level": "concat", 00:14:59.373 "superblock": false, 00:14:59.373 "num_base_bdevs": 4, 00:14:59.373 "num_base_bdevs_discovered": 4, 00:14:59.373 "num_base_bdevs_operational": 4, 00:14:59.373 "base_bdevs_list": [ 00:14:59.373 { 00:14:59.373 "name": "NewBaseBdev", 00:14:59.373 "uuid": "77cdf2b4-088f-42c9-b57c-28e56b3b853f", 00:14:59.373 "is_configured": true, 00:14:59.373 "data_offset": 0, 00:14:59.373 "data_size": 65536 00:14:59.373 }, 00:14:59.373 { 00:14:59.373 "name": "BaseBdev2", 00:14:59.373 "uuid": "eb96bc64-abd0-41b4-afe1-d409a8f56f44", 00:14:59.373 "is_configured": true, 00:14:59.373 "data_offset": 0, 00:14:59.373 "data_size": 65536 00:14:59.373 }, 00:14:59.373 { 00:14:59.373 "name": "BaseBdev3", 00:14:59.373 "uuid": "f10134a1-7def-4ff9-a494-e9f525fe97a1", 00:14:59.373 "is_configured": true, 00:14:59.373 "data_offset": 0, 00:14:59.373 "data_size": 65536 00:14:59.373 }, 00:14:59.373 { 00:14:59.373 "name": "BaseBdev4", 00:14:59.373 "uuid": "6e247a4f-5c99-4a93-8d0d-5f53ab83b4cc", 00:14:59.373 "is_configured": true, 00:14:59.373 "data_offset": 0, 00:14:59.373 "data_size": 65536 00:14:59.373 } 00:14:59.373 ] 00:14:59.373 }' 00:14:59.373 06:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.373 06:46:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.940 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:59.940 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:59.940 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:59.940 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:59.940 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:59.940 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:59.940 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:14:59.940 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:00.199 [2024-08-14 06:46:27.248149] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.199 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:00.199 "name": "Existed_Raid", 00:15:00.199 "aliases": [ 00:15:00.199 "e9da50be-a61a-4652-a13e-d0fbfb00e70b" 00:15:00.199 ], 00:15:00.199 "product_name": "Raid Volume", 00:15:00.199 "block_size": 512, 00:15:00.199 "num_blocks": 262144, 00:15:00.199 "uuid": "e9da50be-a61a-4652-a13e-d0fbfb00e70b", 00:15:00.199 "assigned_rate_limits": { 00:15:00.199 "rw_ios_per_sec": 0, 00:15:00.199 "rw_mbytes_per_sec": 0, 00:15:00.199 "r_mbytes_per_sec": 0, 00:15:00.199 "w_mbytes_per_sec": 0 00:15:00.199 }, 00:15:00.199 "claimed": false, 00:15:00.199 "zoned": false, 00:15:00.199 "supported_io_types": { 00:15:00.199 "read": true, 00:15:00.199 "write": true, 00:15:00.199 "unmap": true, 00:15:00.199 "flush": true, 00:15:00.199 "reset": true, 00:15:00.199 "nvme_admin": false, 00:15:00.199 "nvme_io": false, 00:15:00.199 "nvme_io_md": false, 00:15:00.199 "write_zeroes": true, 00:15:00.199 "zcopy": false, 00:15:00.199 "get_zone_info": false, 00:15:00.199 "zone_management": false, 00:15:00.199 "zone_append": false, 00:15:00.199 "compare": false, 00:15:00.199 "compare_and_write": false, 00:15:00.199 "abort": false, 00:15:00.199 "seek_hole": false, 00:15:00.199 "seek_data": false, 00:15:00.199 "copy": false, 00:15:00.199 "nvme_iov_md": false 00:15:00.199 }, 00:15:00.199 "memory_domains": [ 00:15:00.199 { 00:15:00.199 "dma_device_id": "system", 00:15:00.199 "dma_device_type": 1 00:15:00.199 }, 00:15:00.199 { 00:15:00.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.199 "dma_device_type": 2 00:15:00.199 }, 00:15:00.199 { 00:15:00.199 "dma_device_id": "system", 00:15:00.199 "dma_device_type": 1 00:15:00.199 }, 00:15:00.199 { 00:15:00.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.199 "dma_device_type": 2 00:15:00.199 }, 00:15:00.199 { 00:15:00.199 "dma_device_id": "system", 00:15:00.199 "dma_device_type": 1 00:15:00.199 }, 00:15:00.199 { 00:15:00.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.199 "dma_device_type": 2 00:15:00.199 }, 00:15:00.199 { 00:15:00.199 "dma_device_id": "system", 00:15:00.199 "dma_device_type": 1 00:15:00.199 }, 00:15:00.199 { 00:15:00.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.199 "dma_device_type": 2 00:15:00.199 } 00:15:00.199 ], 00:15:00.199 "driver_specific": { 00:15:00.199 "raid": { 00:15:00.199 "uuid": "e9da50be-a61a-4652-a13e-d0fbfb00e70b", 00:15:00.199 "strip_size_kb": 64, 00:15:00.199 "state": "online", 00:15:00.199 "raid_level": "concat", 00:15:00.199 "superblock": false, 00:15:00.199 "num_base_bdevs": 4, 00:15:00.199 "num_base_bdevs_discovered": 4, 00:15:00.199 "num_base_bdevs_operational": 4, 00:15:00.199 "base_bdevs_list": [ 00:15:00.199 { 00:15:00.199 "name": "NewBaseBdev", 00:15:00.199 "uuid": "77cdf2b4-088f-42c9-b57c-28e56b3b853f", 00:15:00.199 "is_configured": true, 00:15:00.199 "data_offset": 0, 00:15:00.199 "data_size": 65536 00:15:00.199 }, 00:15:00.199 { 00:15:00.199 "name": "BaseBdev2", 00:15:00.199 "uuid": "eb96bc64-abd0-41b4-afe1-d409a8f56f44", 00:15:00.199 "is_configured": true, 00:15:00.199 "data_offset": 0, 00:15:00.199 "data_size": 65536 00:15:00.199 }, 00:15:00.199 { 00:15:00.199 "name": "BaseBdev3", 00:15:00.199 "uuid": 
"f10134a1-7def-4ff9-a494-e9f525fe97a1", 00:15:00.199 "is_configured": true, 00:15:00.199 "data_offset": 0, 00:15:00.199 "data_size": 65536 00:15:00.199 }, 00:15:00.199 { 00:15:00.199 "name": "BaseBdev4", 00:15:00.199 "uuid": "6e247a4f-5c99-4a93-8d0d-5f53ab83b4cc", 00:15:00.199 "is_configured": true, 00:15:00.199 "data_offset": 0, 00:15:00.199 "data_size": 65536 00:15:00.199 } 00:15:00.199 ] 00:15:00.199 } 00:15:00.199 } 00:15:00.199 }' 00:15:00.199 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:00.199 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:00.199 BaseBdev2 00:15:00.199 BaseBdev3 00:15:00.199 BaseBdev4' 00:15:00.200 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:00.200 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:00.200 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:00.458 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:00.458 "name": "NewBaseBdev", 00:15:00.458 "aliases": [ 00:15:00.458 "77cdf2b4-088f-42c9-b57c-28e56b3b853f" 00:15:00.458 ], 00:15:00.458 "product_name": "Malloc disk", 00:15:00.458 "block_size": 512, 00:15:00.458 "num_blocks": 65536, 00:15:00.458 "uuid": "77cdf2b4-088f-42c9-b57c-28e56b3b853f", 00:15:00.458 "assigned_rate_limits": { 00:15:00.458 "rw_ios_per_sec": 0, 00:15:00.458 "rw_mbytes_per_sec": 0, 00:15:00.458 "r_mbytes_per_sec": 0, 00:15:00.458 "w_mbytes_per_sec": 0 00:15:00.458 }, 00:15:00.458 "claimed": true, 00:15:00.458 "claim_type": "exclusive_write", 00:15:00.458 "zoned": false, 00:15:00.458 "supported_io_types": { 00:15:00.458 "read": true, 00:15:00.458 "write": true, 00:15:00.458 "unmap": true, 00:15:00.458 "flush": true, 00:15:00.458 "reset": true, 00:15:00.458 "nvme_admin": false, 00:15:00.458 "nvme_io": false, 00:15:00.458 "nvme_io_md": false, 00:15:00.458 "write_zeroes": true, 00:15:00.458 "zcopy": true, 00:15:00.458 "get_zone_info": false, 00:15:00.458 "zone_management": false, 00:15:00.458 "zone_append": false, 00:15:00.458 "compare": false, 00:15:00.458 "compare_and_write": false, 00:15:00.458 "abort": true, 00:15:00.458 "seek_hole": false, 00:15:00.458 "seek_data": false, 00:15:00.458 "copy": true, 00:15:00.458 "nvme_iov_md": false 00:15:00.458 }, 00:15:00.458 "memory_domains": [ 00:15:00.458 { 00:15:00.458 "dma_device_id": "system", 00:15:00.458 "dma_device_type": 1 00:15:00.458 }, 00:15:00.458 { 00:15:00.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.458 "dma_device_type": 2 00:15:00.458 } 00:15:00.458 ], 00:15:00.458 "driver_specific": {} 00:15:00.458 }' 00:15:00.458 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.458 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.458 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:00.458 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.458 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.716 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:00.716 
06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.716 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.716 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:00.716 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.716 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.717 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:00.717 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:00.717 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:00.717 06:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:00.975 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:00.975 "name": "BaseBdev2", 00:15:00.975 "aliases": [ 00:15:00.975 "eb96bc64-abd0-41b4-afe1-d409a8f56f44" 00:15:00.975 ], 00:15:00.975 "product_name": "Malloc disk", 00:15:00.975 "block_size": 512, 00:15:00.975 "num_blocks": 65536, 00:15:00.975 "uuid": "eb96bc64-abd0-41b4-afe1-d409a8f56f44", 00:15:00.975 "assigned_rate_limits": { 00:15:00.975 "rw_ios_per_sec": 0, 00:15:00.975 "rw_mbytes_per_sec": 0, 00:15:00.975 "r_mbytes_per_sec": 0, 00:15:00.975 "w_mbytes_per_sec": 0 00:15:00.975 }, 00:15:00.975 "claimed": true, 00:15:00.975 "claim_type": "exclusive_write", 00:15:00.975 "zoned": false, 00:15:00.975 "supported_io_types": { 00:15:00.975 "read": true, 00:15:00.975 "write": true, 00:15:00.975 "unmap": true, 00:15:00.975 "flush": true, 00:15:00.975 "reset": true, 00:15:00.975 "nvme_admin": false, 00:15:00.975 "nvme_io": false, 00:15:00.975 "nvme_io_md": false, 00:15:00.975 "write_zeroes": true, 00:15:00.975 "zcopy": true, 00:15:00.975 "get_zone_info": false, 00:15:00.975 "zone_management": false, 00:15:00.975 "zone_append": false, 00:15:00.975 "compare": false, 00:15:00.975 "compare_and_write": false, 00:15:00.975 "abort": true, 00:15:00.975 "seek_hole": false, 00:15:00.975 "seek_data": false, 00:15:00.975 "copy": true, 00:15:00.975 "nvme_iov_md": false 00:15:00.975 }, 00:15:00.975 "memory_domains": [ 00:15:00.975 { 00:15:00.975 "dma_device_id": "system", 00:15:00.975 "dma_device_type": 1 00:15:00.975 }, 00:15:00.975 { 00:15:00.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.975 "dma_device_type": 2 00:15:00.975 } 00:15:00.975 ], 00:15:00.975 "driver_specific": {} 00:15:00.975 }' 00:15:00.975 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.975 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.233 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:01.233 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.233 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.233 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:01.233 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.233 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:15:01.233 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:01.233 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.233 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.492 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:01.492 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:01.492 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:01.492 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:01.751 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:01.751 "name": "BaseBdev3", 00:15:01.751 "aliases": [ 00:15:01.751 "f10134a1-7def-4ff9-a494-e9f525fe97a1" 00:15:01.751 ], 00:15:01.751 "product_name": "Malloc disk", 00:15:01.751 "block_size": 512, 00:15:01.751 "num_blocks": 65536, 00:15:01.751 "uuid": "f10134a1-7def-4ff9-a494-e9f525fe97a1", 00:15:01.751 "assigned_rate_limits": { 00:15:01.751 "rw_ios_per_sec": 0, 00:15:01.751 "rw_mbytes_per_sec": 0, 00:15:01.751 "r_mbytes_per_sec": 0, 00:15:01.751 "w_mbytes_per_sec": 0 00:15:01.751 }, 00:15:01.751 "claimed": true, 00:15:01.751 "claim_type": "exclusive_write", 00:15:01.751 "zoned": false, 00:15:01.751 "supported_io_types": { 00:15:01.751 "read": true, 00:15:01.751 "write": true, 00:15:01.751 "unmap": true, 00:15:01.751 "flush": true, 00:15:01.751 "reset": true, 00:15:01.751 "nvme_admin": false, 00:15:01.751 "nvme_io": false, 00:15:01.751 "nvme_io_md": false, 00:15:01.751 "write_zeroes": true, 00:15:01.751 "zcopy": true, 00:15:01.751 "get_zone_info": false, 00:15:01.751 "zone_management": false, 00:15:01.751 "zone_append": false, 00:15:01.751 "compare": false, 00:15:01.751 "compare_and_write": false, 00:15:01.751 "abort": true, 00:15:01.751 "seek_hole": false, 00:15:01.751 "seek_data": false, 00:15:01.751 "copy": true, 00:15:01.751 "nvme_iov_md": false 00:15:01.751 }, 00:15:01.751 "memory_domains": [ 00:15:01.751 { 00:15:01.751 "dma_device_id": "system", 00:15:01.751 "dma_device_type": 1 00:15:01.751 }, 00:15:01.751 { 00:15:01.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.751 "dma_device_type": 2 00:15:01.751 } 00:15:01.751 ], 00:15:01.751 "driver_specific": {} 00:15:01.751 }' 00:15:01.751 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.751 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.751 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:01.751 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.751 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.751 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:01.751 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.751 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.751 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:01.751 06:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:15:02.011 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.011 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:02.011 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:02.011 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:02.011 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:02.273 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:02.273 "name": "BaseBdev4", 00:15:02.273 "aliases": [ 00:15:02.273 "6e247a4f-5c99-4a93-8d0d-5f53ab83b4cc" 00:15:02.273 ], 00:15:02.273 "product_name": "Malloc disk", 00:15:02.273 "block_size": 512, 00:15:02.273 "num_blocks": 65536, 00:15:02.273 "uuid": "6e247a4f-5c99-4a93-8d0d-5f53ab83b4cc", 00:15:02.273 "assigned_rate_limits": { 00:15:02.273 "rw_ios_per_sec": 0, 00:15:02.273 "rw_mbytes_per_sec": 0, 00:15:02.273 "r_mbytes_per_sec": 0, 00:15:02.273 "w_mbytes_per_sec": 0 00:15:02.273 }, 00:15:02.273 "claimed": true, 00:15:02.273 "claim_type": "exclusive_write", 00:15:02.273 "zoned": false, 00:15:02.273 "supported_io_types": { 00:15:02.273 "read": true, 00:15:02.273 "write": true, 00:15:02.273 "unmap": true, 00:15:02.273 "flush": true, 00:15:02.273 "reset": true, 00:15:02.273 "nvme_admin": false, 00:15:02.273 "nvme_io": false, 00:15:02.273 "nvme_io_md": false, 00:15:02.273 "write_zeroes": true, 00:15:02.273 "zcopy": true, 00:15:02.273 "get_zone_info": false, 00:15:02.273 "zone_management": false, 00:15:02.273 "zone_append": false, 00:15:02.273 "compare": false, 00:15:02.273 "compare_and_write": false, 00:15:02.273 "abort": true, 00:15:02.273 "seek_hole": false, 00:15:02.273 "seek_data": false, 00:15:02.273 "copy": true, 00:15:02.273 "nvme_iov_md": false 00:15:02.273 }, 00:15:02.273 "memory_domains": [ 00:15:02.273 { 00:15:02.273 "dma_device_id": "system", 00:15:02.273 "dma_device_type": 1 00:15:02.273 }, 00:15:02.273 { 00:15:02.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.273 "dma_device_type": 2 00:15:02.273 } 00:15:02.273 ], 00:15:02.273 "driver_specific": {} 00:15:02.273 }' 00:15:02.273 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.273 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.273 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:02.273 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.273 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.273 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:02.273 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.273 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.532 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:02.532 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.532 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.532 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ 
null == null ]] 00:15:02.532 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:02.791 [2024-08-14 06:46:29.791604] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:02.791 [2024-08-14 06:46:29.791745] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.791 [2024-08-14 06:46:29.791917] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.791 [2024-08-14 06:46:29.792000] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.791 [2024-08-14 06:46:29.792025] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:15:02.791 06:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 86416 00:15:02.791 06:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 86416 ']' 00:15:02.791 06:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 86416 00:15:02.791 06:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:15:02.791 06:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:02.791 06:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86416 00:15:02.791 06:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:02.791 06:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:02.791 06:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86416' 00:15:02.791 killing process with pid 86416 00:15:02.791 06:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 86416 00:15:02.791 [2024-08-14 06:46:29.853381] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.791 06:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 86416 00:15:02.791 [2024-08-14 06:46:29.896946] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:03.049 06:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:03.049 00:15:03.049 real 0m27.940s 00:15:03.049 user 0m51.829s 00:15:03.049 sys 0m4.232s 00:15:03.049 ************************************ 00:15:03.049 END TEST raid_state_function_test 00:15:03.049 ************************************ 00:15:03.049 06:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:03.049 06:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.049 06:46:30 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:15:03.049 06:46:30 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:03.049 06:46:30 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:03.049 06:46:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:03.049 ************************************ 00:15:03.049 START TEST raid_state_function_test_sb 00:15:03.049 ************************************ 00:15:03.049 06:46:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 true 00:15:03.049 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:03.049 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:03.049 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=87420 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 87420' 00:15:03.050 Process raid pid: 87420 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 87420 /var/tmp/spdk-raid.sock 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 87420 ']' 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:03.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:03.050 06:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.309 [2024-08-14 06:46:30.312680] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:15:03.309 [2024-08-14 06:46:30.312800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.309 [2024-08-14 06:46:30.460302] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.309 [2024-08-14 06:46:30.511858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.309 [2024-08-14 06:46:30.557687] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.309 [2024-08-14 06:46:30.557724] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.244 06:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:04.244 06:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:15:04.244 06:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:04.244 [2024-08-14 06:46:31.327611] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:04.244 [2024-08-14 06:46:31.327776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:04.244 [2024-08-14 06:46:31.327797] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.244 [2024-08-14 06:46:31.327808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.244 [2024-08-14 06:46:31.327823] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:04.244 [2024-08-14 06:46:31.327833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:04.244 [2024-08-14 06:46:31.327846] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:04.244 [2024-08-14 
06:46:31.327856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:04.244 06:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:04.244 06:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:04.244 06:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:04.244 06:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:04.244 06:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:04.244 06:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:04.244 06:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:04.244 06:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:04.244 06:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:04.244 06:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:04.244 06:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.244 06:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.503 06:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:04.503 "name": "Existed_Raid", 00:15:04.503 "uuid": "20c53761-c0ad-4677-a818-35c28dbed8d8", 00:15:04.503 "strip_size_kb": 64, 00:15:04.503 "state": "configuring", 00:15:04.503 "raid_level": "concat", 00:15:04.503 "superblock": true, 00:15:04.503 "num_base_bdevs": 4, 00:15:04.503 "num_base_bdevs_discovered": 0, 00:15:04.504 "num_base_bdevs_operational": 4, 00:15:04.504 "base_bdevs_list": [ 00:15:04.504 { 00:15:04.504 "name": "BaseBdev1", 00:15:04.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.504 "is_configured": false, 00:15:04.504 "data_offset": 0, 00:15:04.504 "data_size": 0 00:15:04.504 }, 00:15:04.504 { 00:15:04.504 "name": "BaseBdev2", 00:15:04.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.504 "is_configured": false, 00:15:04.504 "data_offset": 0, 00:15:04.504 "data_size": 0 00:15:04.504 }, 00:15:04.504 { 00:15:04.504 "name": "BaseBdev3", 00:15:04.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.504 "is_configured": false, 00:15:04.504 "data_offset": 0, 00:15:04.504 "data_size": 0 00:15:04.504 }, 00:15:04.504 { 00:15:04.504 "name": "BaseBdev4", 00:15:04.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.504 "is_configured": false, 00:15:04.504 "data_offset": 0, 00:15:04.504 "data_size": 0 00:15:04.504 } 00:15:04.504 ] 00:15:04.504 }' 00:15:04.504 06:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:04.504 06:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.082 06:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:05.349 [2024-08-14 06:46:32.345807] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:15:05.349 [2024-08-14 06:46:32.345964] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:15:05.349 06:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:05.349 [2024-08-14 06:46:32.533536] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:05.349 [2024-08-14 06:46:32.533693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:05.349 [2024-08-14 06:46:32.533732] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:05.349 [2024-08-14 06:46:32.533760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:05.349 [2024-08-14 06:46:32.533786] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:05.349 [2024-08-14 06:46:32.533813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:05.349 [2024-08-14 06:46:32.533838] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:05.349 [2024-08-14 06:46:32.533863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:05.349 06:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:05.606 [2024-08-14 06:46:32.735033] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.606 BaseBdev1 00:15:05.606 06:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:05.607 06:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:05.607 06:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:05.607 06:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:05.607 06:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:05.607 06:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:05.607 06:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:05.864 06:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:06.122 [ 00:15:06.122 { 00:15:06.122 "name": "BaseBdev1", 00:15:06.122 "aliases": [ 00:15:06.122 "90531957-8644-4f53-9163-b86a664ff54d" 00:15:06.122 ], 00:15:06.122 "product_name": "Malloc disk", 00:15:06.122 "block_size": 512, 00:15:06.122 "num_blocks": 65536, 00:15:06.122 "uuid": "90531957-8644-4f53-9163-b86a664ff54d", 00:15:06.122 "assigned_rate_limits": { 00:15:06.122 "rw_ios_per_sec": 0, 00:15:06.122 "rw_mbytes_per_sec": 0, 00:15:06.122 "r_mbytes_per_sec": 0, 00:15:06.122 "w_mbytes_per_sec": 0 00:15:06.122 }, 00:15:06.122 "claimed": true, 00:15:06.122 "claim_type": "exclusive_write", 00:15:06.122 "zoned": false, 00:15:06.122 
"supported_io_types": { 00:15:06.122 "read": true, 00:15:06.122 "write": true, 00:15:06.122 "unmap": true, 00:15:06.122 "flush": true, 00:15:06.122 "reset": true, 00:15:06.122 "nvme_admin": false, 00:15:06.122 "nvme_io": false, 00:15:06.122 "nvme_io_md": false, 00:15:06.122 "write_zeroes": true, 00:15:06.122 "zcopy": true, 00:15:06.122 "get_zone_info": false, 00:15:06.122 "zone_management": false, 00:15:06.122 "zone_append": false, 00:15:06.122 "compare": false, 00:15:06.122 "compare_and_write": false, 00:15:06.122 "abort": true, 00:15:06.122 "seek_hole": false, 00:15:06.122 "seek_data": false, 00:15:06.122 "copy": true, 00:15:06.122 "nvme_iov_md": false 00:15:06.122 }, 00:15:06.122 "memory_domains": [ 00:15:06.122 { 00:15:06.122 "dma_device_id": "system", 00:15:06.122 "dma_device_type": 1 00:15:06.122 }, 00:15:06.122 { 00:15:06.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.122 "dma_device_type": 2 00:15:06.122 } 00:15:06.122 ], 00:15:06.122 "driver_specific": {} 00:15:06.122 } 00:15:06.122 ] 00:15:06.122 06:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:06.122 06:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:06.122 06:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:06.122 06:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:06.122 06:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:06.122 06:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:06.122 06:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:06.122 06:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:06.122 06:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:06.122 06:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:06.122 06:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:06.122 06:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.122 06:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.380 06:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:06.380 "name": "Existed_Raid", 00:15:06.380 "uuid": "2f8a1622-bc12-4f69-8a5a-a3af8d744c4e", 00:15:06.380 "strip_size_kb": 64, 00:15:06.380 "state": "configuring", 00:15:06.380 "raid_level": "concat", 00:15:06.380 "superblock": true, 00:15:06.380 "num_base_bdevs": 4, 00:15:06.380 "num_base_bdevs_discovered": 1, 00:15:06.380 "num_base_bdevs_operational": 4, 00:15:06.380 "base_bdevs_list": [ 00:15:06.380 { 00:15:06.380 "name": "BaseBdev1", 00:15:06.380 "uuid": "90531957-8644-4f53-9163-b86a664ff54d", 00:15:06.380 "is_configured": true, 00:15:06.380 "data_offset": 2048, 00:15:06.380 "data_size": 63488 00:15:06.380 }, 00:15:06.380 { 00:15:06.380 "name": "BaseBdev2", 00:15:06.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.380 "is_configured": false, 00:15:06.380 "data_offset": 0, 
00:15:06.380 "data_size": 0 00:15:06.380 }, 00:15:06.380 { 00:15:06.380 "name": "BaseBdev3", 00:15:06.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.380 "is_configured": false, 00:15:06.380 "data_offset": 0, 00:15:06.380 "data_size": 0 00:15:06.380 }, 00:15:06.380 { 00:15:06.380 "name": "BaseBdev4", 00:15:06.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.380 "is_configured": false, 00:15:06.380 "data_offset": 0, 00:15:06.380 "data_size": 0 00:15:06.380 } 00:15:06.380 ] 00:15:06.380 }' 00:15:06.380 06:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:06.380 06:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.948 06:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:06.948 [2024-08-14 06:46:34.156798] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:06.948 [2024-08-14 06:46:34.156883] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:06.949 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:07.207 [2024-08-14 06:46:34.348549] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:07.207 [2024-08-14 06:46:34.350532] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:07.207 [2024-08-14 06:46:34.350647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:07.207 [2024-08-14 06:46:34.350674] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:07.207 [2024-08-14 06:46:34.350684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:07.207 [2024-08-14 06:46:34.350696] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:07.207 [2024-08-14 06:46:34.350705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:07.207 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:07.207 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:07.207 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:07.207 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:07.207 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:07.207 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:07.207 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.207 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:07.208 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.208 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:15:07.208 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.208 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.208 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.208 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.467 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.467 "name": "Existed_Raid", 00:15:07.467 "uuid": "1a3e48ae-090a-43a7-ac31-ef6fd2490e60", 00:15:07.467 "strip_size_kb": 64, 00:15:07.467 "state": "configuring", 00:15:07.467 "raid_level": "concat", 00:15:07.467 "superblock": true, 00:15:07.467 "num_base_bdevs": 4, 00:15:07.467 "num_base_bdevs_discovered": 1, 00:15:07.467 "num_base_bdevs_operational": 4, 00:15:07.467 "base_bdevs_list": [ 00:15:07.467 { 00:15:07.467 "name": "BaseBdev1", 00:15:07.467 "uuid": "90531957-8644-4f53-9163-b86a664ff54d", 00:15:07.467 "is_configured": true, 00:15:07.467 "data_offset": 2048, 00:15:07.467 "data_size": 63488 00:15:07.467 }, 00:15:07.467 { 00:15:07.467 "name": "BaseBdev2", 00:15:07.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.467 "is_configured": false, 00:15:07.467 "data_offset": 0, 00:15:07.467 "data_size": 0 00:15:07.467 }, 00:15:07.467 { 00:15:07.467 "name": "BaseBdev3", 00:15:07.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.467 "is_configured": false, 00:15:07.467 "data_offset": 0, 00:15:07.467 "data_size": 0 00:15:07.467 }, 00:15:07.467 { 00:15:07.467 "name": "BaseBdev4", 00:15:07.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.467 "is_configured": false, 00:15:07.467 "data_offset": 0, 00:15:07.467 "data_size": 0 00:15:07.467 } 00:15:07.467 ] 00:15:07.467 }' 00:15:07.467 06:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.467 06:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.034 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:08.291 [2024-08-14 06:46:35.317875] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.291 BaseBdev2 00:15:08.291 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:08.291 06:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:08.291 06:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:08.291 06:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:08.291 06:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:08.291 06:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:08.291 06:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:08.549 06:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:08.549 [ 00:15:08.549 { 00:15:08.549 "name": "BaseBdev2", 00:15:08.549 "aliases": [ 00:15:08.549 "b18272a6-753b-4803-beba-e60bbaf2b714" 00:15:08.549 ], 00:15:08.549 "product_name": "Malloc disk", 00:15:08.549 "block_size": 512, 00:15:08.549 "num_blocks": 65536, 00:15:08.549 "uuid": "b18272a6-753b-4803-beba-e60bbaf2b714", 00:15:08.549 "assigned_rate_limits": { 00:15:08.549 "rw_ios_per_sec": 0, 00:15:08.549 "rw_mbytes_per_sec": 0, 00:15:08.549 "r_mbytes_per_sec": 0, 00:15:08.549 "w_mbytes_per_sec": 0 00:15:08.549 }, 00:15:08.549 "claimed": true, 00:15:08.549 "claim_type": "exclusive_write", 00:15:08.549 "zoned": false, 00:15:08.549 "supported_io_types": { 00:15:08.549 "read": true, 00:15:08.549 "write": true, 00:15:08.549 "unmap": true, 00:15:08.549 "flush": true, 00:15:08.549 "reset": true, 00:15:08.549 "nvme_admin": false, 00:15:08.549 "nvme_io": false, 00:15:08.549 "nvme_io_md": false, 00:15:08.549 "write_zeroes": true, 00:15:08.549 "zcopy": true, 00:15:08.549 "get_zone_info": false, 00:15:08.549 "zone_management": false, 00:15:08.549 "zone_append": false, 00:15:08.549 "compare": false, 00:15:08.549 "compare_and_write": false, 00:15:08.549 "abort": true, 00:15:08.549 "seek_hole": false, 00:15:08.549 "seek_data": false, 00:15:08.549 "copy": true, 00:15:08.549 "nvme_iov_md": false 00:15:08.549 }, 00:15:08.549 "memory_domains": [ 00:15:08.549 { 00:15:08.549 "dma_device_id": "system", 00:15:08.549 "dma_device_type": 1 00:15:08.549 }, 00:15:08.549 { 00:15:08.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.549 "dma_device_type": 2 00:15:08.549 } 00:15:08.549 ], 00:15:08.549 "driver_specific": {} 00:15:08.549 } 00:15:08.549 ] 00:15:08.549 06:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:08.549 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:08.549 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:08.549 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:08.549 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:08.549 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:08.549 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:08.549 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:08.549 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:08.549 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:08.549 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:08.549 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:08.549 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:08.549 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.549 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:15:08.807 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:08.807 "name": "Existed_Raid", 00:15:08.807 "uuid": "1a3e48ae-090a-43a7-ac31-ef6fd2490e60", 00:15:08.807 "strip_size_kb": 64, 00:15:08.807 "state": "configuring", 00:15:08.807 "raid_level": "concat", 00:15:08.807 "superblock": true, 00:15:08.807 "num_base_bdevs": 4, 00:15:08.807 "num_base_bdevs_discovered": 2, 00:15:08.807 "num_base_bdevs_operational": 4, 00:15:08.807 "base_bdevs_list": [ 00:15:08.807 { 00:15:08.807 "name": "BaseBdev1", 00:15:08.807 "uuid": "90531957-8644-4f53-9163-b86a664ff54d", 00:15:08.807 "is_configured": true, 00:15:08.807 "data_offset": 2048, 00:15:08.807 "data_size": 63488 00:15:08.807 }, 00:15:08.807 { 00:15:08.807 "name": "BaseBdev2", 00:15:08.807 "uuid": "b18272a6-753b-4803-beba-e60bbaf2b714", 00:15:08.807 "is_configured": true, 00:15:08.807 "data_offset": 2048, 00:15:08.807 "data_size": 63488 00:15:08.807 }, 00:15:08.807 { 00:15:08.807 "name": "BaseBdev3", 00:15:08.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.807 "is_configured": false, 00:15:08.807 "data_offset": 0, 00:15:08.807 "data_size": 0 00:15:08.807 }, 00:15:08.807 { 00:15:08.807 "name": "BaseBdev4", 00:15:08.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.808 "is_configured": false, 00:15:08.808 "data_offset": 0, 00:15:08.808 "data_size": 0 00:15:08.808 } 00:15:08.808 ] 00:15:08.808 }' 00:15:08.808 06:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:08.808 06:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.373 06:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:09.630 [2024-08-14 06:46:36.743233] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:09.630 BaseBdev3 00:15:09.631 06:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:09.631 06:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:09.631 06:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:09.631 06:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:09.631 06:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:09.631 06:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:09.631 06:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:09.889 06:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:10.147 [ 00:15:10.147 { 00:15:10.147 "name": "BaseBdev3", 00:15:10.147 "aliases": [ 00:15:10.147 "6f4f5280-2a9f-4c45-9f53-1202770aaee0" 00:15:10.147 ], 00:15:10.147 "product_name": "Malloc disk", 00:15:10.147 "block_size": 512, 00:15:10.147 "num_blocks": 65536, 00:15:10.147 "uuid": "6f4f5280-2a9f-4c45-9f53-1202770aaee0", 00:15:10.147 "assigned_rate_limits": { 00:15:10.147 "rw_ios_per_sec": 0, 00:15:10.147 "rw_mbytes_per_sec": 0, 00:15:10.147 "r_mbytes_per_sec": 0, 
00:15:10.147 "w_mbytes_per_sec": 0 00:15:10.147 }, 00:15:10.147 "claimed": true, 00:15:10.147 "claim_type": "exclusive_write", 00:15:10.147 "zoned": false, 00:15:10.147 "supported_io_types": { 00:15:10.147 "read": true, 00:15:10.147 "write": true, 00:15:10.147 "unmap": true, 00:15:10.147 "flush": true, 00:15:10.147 "reset": true, 00:15:10.147 "nvme_admin": false, 00:15:10.147 "nvme_io": false, 00:15:10.147 "nvme_io_md": false, 00:15:10.147 "write_zeroes": true, 00:15:10.147 "zcopy": true, 00:15:10.147 "get_zone_info": false, 00:15:10.147 "zone_management": false, 00:15:10.147 "zone_append": false, 00:15:10.147 "compare": false, 00:15:10.147 "compare_and_write": false, 00:15:10.147 "abort": true, 00:15:10.147 "seek_hole": false, 00:15:10.147 "seek_data": false, 00:15:10.147 "copy": true, 00:15:10.147 "nvme_iov_md": false 00:15:10.147 }, 00:15:10.147 "memory_domains": [ 00:15:10.147 { 00:15:10.147 "dma_device_id": "system", 00:15:10.147 "dma_device_type": 1 00:15:10.147 }, 00:15:10.147 { 00:15:10.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.147 "dma_device_type": 2 00:15:10.147 } 00:15:10.147 ], 00:15:10.147 "driver_specific": {} 00:15:10.147 } 00:15:10.147 ] 00:15:10.147 06:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:10.147 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:10.147 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:10.147 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:10.147 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:10.147 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:10.147 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:10.147 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:10.147 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:10.147 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:10.147 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:10.147 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:10.147 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:10.148 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.148 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.411 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:10.411 "name": "Existed_Raid", 00:15:10.411 "uuid": "1a3e48ae-090a-43a7-ac31-ef6fd2490e60", 00:15:10.411 "strip_size_kb": 64, 00:15:10.411 "state": "configuring", 00:15:10.411 "raid_level": "concat", 00:15:10.411 "superblock": true, 00:15:10.411 "num_base_bdevs": 4, 00:15:10.411 "num_base_bdevs_discovered": 3, 00:15:10.411 "num_base_bdevs_operational": 4, 00:15:10.411 "base_bdevs_list": [ 00:15:10.411 { 00:15:10.411 
"name": "BaseBdev1", 00:15:10.411 "uuid": "90531957-8644-4f53-9163-b86a664ff54d", 00:15:10.411 "is_configured": true, 00:15:10.411 "data_offset": 2048, 00:15:10.411 "data_size": 63488 00:15:10.411 }, 00:15:10.411 { 00:15:10.411 "name": "BaseBdev2", 00:15:10.411 "uuid": "b18272a6-753b-4803-beba-e60bbaf2b714", 00:15:10.411 "is_configured": true, 00:15:10.411 "data_offset": 2048, 00:15:10.411 "data_size": 63488 00:15:10.411 }, 00:15:10.411 { 00:15:10.411 "name": "BaseBdev3", 00:15:10.411 "uuid": "6f4f5280-2a9f-4c45-9f53-1202770aaee0", 00:15:10.411 "is_configured": true, 00:15:10.411 "data_offset": 2048, 00:15:10.411 "data_size": 63488 00:15:10.411 }, 00:15:10.411 { 00:15:10.411 "name": "BaseBdev4", 00:15:10.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.411 "is_configured": false, 00:15:10.411 "data_offset": 0, 00:15:10.411 "data_size": 0 00:15:10.411 } 00:15:10.411 ] 00:15:10.411 }' 00:15:10.411 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:10.411 06:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.989 06:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:10.989 [2024-08-14 06:46:38.124765] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:10.989 [2024-08-14 06:46:38.125133] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:10.989 [2024-08-14 06:46:38.125219] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:10.989 [2024-08-14 06:46:38.125627] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:10.989 [2024-08-14 06:46:38.125851] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:10.989 [2024-08-14 06:46:38.125915] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:10.989 [2024-08-14 06:46:38.126113] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.989 BaseBdev4 00:15:10.989 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:10.989 06:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:15:10.989 06:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:10.989 06:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:10.989 06:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:10.989 06:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:10.989 06:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:11.246 06:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:11.505 [ 00:15:11.505 { 00:15:11.505 "name": "BaseBdev4", 00:15:11.505 "aliases": [ 00:15:11.505 "7dd6c9d1-dd78-4c55-92c7-04d3b2842c9c" 00:15:11.505 ], 00:15:11.505 "product_name": "Malloc disk", 00:15:11.505 "block_size": 
512, 00:15:11.505 "num_blocks": 65536, 00:15:11.505 "uuid": "7dd6c9d1-dd78-4c55-92c7-04d3b2842c9c", 00:15:11.505 "assigned_rate_limits": { 00:15:11.505 "rw_ios_per_sec": 0, 00:15:11.505 "rw_mbytes_per_sec": 0, 00:15:11.505 "r_mbytes_per_sec": 0, 00:15:11.505 "w_mbytes_per_sec": 0 00:15:11.505 }, 00:15:11.505 "claimed": true, 00:15:11.505 "claim_type": "exclusive_write", 00:15:11.505 "zoned": false, 00:15:11.505 "supported_io_types": { 00:15:11.505 "read": true, 00:15:11.505 "write": true, 00:15:11.505 "unmap": true, 00:15:11.505 "flush": true, 00:15:11.505 "reset": true, 00:15:11.505 "nvme_admin": false, 00:15:11.505 "nvme_io": false, 00:15:11.505 "nvme_io_md": false, 00:15:11.505 "write_zeroes": true, 00:15:11.505 "zcopy": true, 00:15:11.505 "get_zone_info": false, 00:15:11.505 "zone_management": false, 00:15:11.505 "zone_append": false, 00:15:11.505 "compare": false, 00:15:11.505 "compare_and_write": false, 00:15:11.505 "abort": true, 00:15:11.505 "seek_hole": false, 00:15:11.505 "seek_data": false, 00:15:11.505 "copy": true, 00:15:11.505 "nvme_iov_md": false 00:15:11.505 }, 00:15:11.505 "memory_domains": [ 00:15:11.505 { 00:15:11.505 "dma_device_id": "system", 00:15:11.505 "dma_device_type": 1 00:15:11.505 }, 00:15:11.505 { 00:15:11.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.505 "dma_device_type": 2 00:15:11.505 } 00:15:11.505 ], 00:15:11.505 "driver_specific": {} 00:15:11.505 } 00:15:11.505 ] 00:15:11.505 06:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:11.505 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:11.505 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:11.505 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:11.505 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:11.505 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:11.505 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:11.505 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:11.505 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:11.505 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:11.505 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:11.505 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:11.505 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.505 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.505 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.764 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:11.764 "name": "Existed_Raid", 00:15:11.764 "uuid": "1a3e48ae-090a-43a7-ac31-ef6fd2490e60", 00:15:11.764 "strip_size_kb": 64, 00:15:11.764 "state": "online", 00:15:11.764 "raid_level": 
"concat", 00:15:11.764 "superblock": true, 00:15:11.764 "num_base_bdevs": 4, 00:15:11.764 "num_base_bdevs_discovered": 4, 00:15:11.764 "num_base_bdevs_operational": 4, 00:15:11.764 "base_bdevs_list": [ 00:15:11.764 { 00:15:11.764 "name": "BaseBdev1", 00:15:11.764 "uuid": "90531957-8644-4f53-9163-b86a664ff54d", 00:15:11.764 "is_configured": true, 00:15:11.764 "data_offset": 2048, 00:15:11.764 "data_size": 63488 00:15:11.764 }, 00:15:11.764 { 00:15:11.764 "name": "BaseBdev2", 00:15:11.764 "uuid": "b18272a6-753b-4803-beba-e60bbaf2b714", 00:15:11.764 "is_configured": true, 00:15:11.764 "data_offset": 2048, 00:15:11.764 "data_size": 63488 00:15:11.764 }, 00:15:11.764 { 00:15:11.764 "name": "BaseBdev3", 00:15:11.764 "uuid": "6f4f5280-2a9f-4c45-9f53-1202770aaee0", 00:15:11.764 "is_configured": true, 00:15:11.764 "data_offset": 2048, 00:15:11.764 "data_size": 63488 00:15:11.764 }, 00:15:11.764 { 00:15:11.764 "name": "BaseBdev4", 00:15:11.764 "uuid": "7dd6c9d1-dd78-4c55-92c7-04d3b2842c9c", 00:15:11.764 "is_configured": true, 00:15:11.764 "data_offset": 2048, 00:15:11.764 "data_size": 63488 00:15:11.764 } 00:15:11.764 ] 00:15:11.764 }' 00:15:11.764 06:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:11.764 06:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.332 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:12.332 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:12.332 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:12.332 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:12.332 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:12.332 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:12.332 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:12.332 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:12.332 [2024-08-14 06:46:39.534866] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.332 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:12.332 "name": "Existed_Raid", 00:15:12.332 "aliases": [ 00:15:12.332 "1a3e48ae-090a-43a7-ac31-ef6fd2490e60" 00:15:12.332 ], 00:15:12.332 "product_name": "Raid Volume", 00:15:12.332 "block_size": 512, 00:15:12.332 "num_blocks": 253952, 00:15:12.332 "uuid": "1a3e48ae-090a-43a7-ac31-ef6fd2490e60", 00:15:12.332 "assigned_rate_limits": { 00:15:12.332 "rw_ios_per_sec": 0, 00:15:12.332 "rw_mbytes_per_sec": 0, 00:15:12.332 "r_mbytes_per_sec": 0, 00:15:12.332 "w_mbytes_per_sec": 0 00:15:12.332 }, 00:15:12.332 "claimed": false, 00:15:12.332 "zoned": false, 00:15:12.332 "supported_io_types": { 00:15:12.332 "read": true, 00:15:12.332 "write": true, 00:15:12.332 "unmap": true, 00:15:12.332 "flush": true, 00:15:12.332 "reset": true, 00:15:12.332 "nvme_admin": false, 00:15:12.332 "nvme_io": false, 00:15:12.332 "nvme_io_md": false, 00:15:12.332 "write_zeroes": true, 00:15:12.332 "zcopy": false, 00:15:12.332 "get_zone_info": false, 00:15:12.332 "zone_management": false, 00:15:12.332 
"zone_append": false, 00:15:12.332 "compare": false, 00:15:12.332 "compare_and_write": false, 00:15:12.332 "abort": false, 00:15:12.332 "seek_hole": false, 00:15:12.332 "seek_data": false, 00:15:12.332 "copy": false, 00:15:12.332 "nvme_iov_md": false 00:15:12.332 }, 00:15:12.332 "memory_domains": [ 00:15:12.332 { 00:15:12.332 "dma_device_id": "system", 00:15:12.332 "dma_device_type": 1 00:15:12.332 }, 00:15:12.332 { 00:15:12.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.332 "dma_device_type": 2 00:15:12.332 }, 00:15:12.332 { 00:15:12.332 "dma_device_id": "system", 00:15:12.332 "dma_device_type": 1 00:15:12.332 }, 00:15:12.332 { 00:15:12.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.332 "dma_device_type": 2 00:15:12.332 }, 00:15:12.332 { 00:15:12.332 "dma_device_id": "system", 00:15:12.332 "dma_device_type": 1 00:15:12.332 }, 00:15:12.332 { 00:15:12.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.332 "dma_device_type": 2 00:15:12.332 }, 00:15:12.332 { 00:15:12.332 "dma_device_id": "system", 00:15:12.332 "dma_device_type": 1 00:15:12.332 }, 00:15:12.332 { 00:15:12.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.333 "dma_device_type": 2 00:15:12.333 } 00:15:12.333 ], 00:15:12.333 "driver_specific": { 00:15:12.333 "raid": { 00:15:12.333 "uuid": "1a3e48ae-090a-43a7-ac31-ef6fd2490e60", 00:15:12.333 "strip_size_kb": 64, 00:15:12.333 "state": "online", 00:15:12.333 "raid_level": "concat", 00:15:12.333 "superblock": true, 00:15:12.333 "num_base_bdevs": 4, 00:15:12.333 "num_base_bdevs_discovered": 4, 00:15:12.333 "num_base_bdevs_operational": 4, 00:15:12.333 "base_bdevs_list": [ 00:15:12.333 { 00:15:12.333 "name": "BaseBdev1", 00:15:12.333 "uuid": "90531957-8644-4f53-9163-b86a664ff54d", 00:15:12.333 "is_configured": true, 00:15:12.333 "data_offset": 2048, 00:15:12.333 "data_size": 63488 00:15:12.333 }, 00:15:12.333 { 00:15:12.333 "name": "BaseBdev2", 00:15:12.333 "uuid": "b18272a6-753b-4803-beba-e60bbaf2b714", 00:15:12.333 "is_configured": true, 00:15:12.333 "data_offset": 2048, 00:15:12.333 "data_size": 63488 00:15:12.333 }, 00:15:12.333 { 00:15:12.333 "name": "BaseBdev3", 00:15:12.333 "uuid": "6f4f5280-2a9f-4c45-9f53-1202770aaee0", 00:15:12.333 "is_configured": true, 00:15:12.333 "data_offset": 2048, 00:15:12.333 "data_size": 63488 00:15:12.333 }, 00:15:12.333 { 00:15:12.333 "name": "BaseBdev4", 00:15:12.333 "uuid": "7dd6c9d1-dd78-4c55-92c7-04d3b2842c9c", 00:15:12.333 "is_configured": true, 00:15:12.333 "data_offset": 2048, 00:15:12.333 "data_size": 63488 00:15:12.333 } 00:15:12.333 ] 00:15:12.333 } 00:15:12.333 } 00:15:12.333 }' 00:15:12.333 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:12.592 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:12.592 BaseBdev2 00:15:12.592 BaseBdev3 00:15:12.592 BaseBdev4' 00:15:12.592 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:12.592 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:12.592 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:12.592 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:12.592 "name": "BaseBdev1", 00:15:12.592 "aliases": [ 00:15:12.592 
"90531957-8644-4f53-9163-b86a664ff54d" 00:15:12.592 ], 00:15:12.592 "product_name": "Malloc disk", 00:15:12.592 "block_size": 512, 00:15:12.592 "num_blocks": 65536, 00:15:12.592 "uuid": "90531957-8644-4f53-9163-b86a664ff54d", 00:15:12.592 "assigned_rate_limits": { 00:15:12.592 "rw_ios_per_sec": 0, 00:15:12.592 "rw_mbytes_per_sec": 0, 00:15:12.592 "r_mbytes_per_sec": 0, 00:15:12.592 "w_mbytes_per_sec": 0 00:15:12.592 }, 00:15:12.592 "claimed": true, 00:15:12.592 "claim_type": "exclusive_write", 00:15:12.592 "zoned": false, 00:15:12.592 "supported_io_types": { 00:15:12.592 "read": true, 00:15:12.592 "write": true, 00:15:12.592 "unmap": true, 00:15:12.592 "flush": true, 00:15:12.592 "reset": true, 00:15:12.592 "nvme_admin": false, 00:15:12.592 "nvme_io": false, 00:15:12.592 "nvme_io_md": false, 00:15:12.592 "write_zeroes": true, 00:15:12.592 "zcopy": true, 00:15:12.592 "get_zone_info": false, 00:15:12.592 "zone_management": false, 00:15:12.592 "zone_append": false, 00:15:12.592 "compare": false, 00:15:12.592 "compare_and_write": false, 00:15:12.592 "abort": true, 00:15:12.592 "seek_hole": false, 00:15:12.592 "seek_data": false, 00:15:12.592 "copy": true, 00:15:12.592 "nvme_iov_md": false 00:15:12.592 }, 00:15:12.592 "memory_domains": [ 00:15:12.592 { 00:15:12.592 "dma_device_id": "system", 00:15:12.592 "dma_device_type": 1 00:15:12.592 }, 00:15:12.592 { 00:15:12.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.592 "dma_device_type": 2 00:15:12.592 } 00:15:12.592 ], 00:15:12.592 "driver_specific": {} 00:15:12.592 }' 00:15:12.592 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.851 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.851 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:12.851 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.851 06:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.851 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:12.851 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.851 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:13.111 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:13.111 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:13.111 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:13.111 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:13.111 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:13.111 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:13.111 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:13.370 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:13.370 "name": "BaseBdev2", 00:15:13.370 "aliases": [ 00:15:13.370 "b18272a6-753b-4803-beba-e60bbaf2b714" 00:15:13.370 ], 00:15:13.370 "product_name": "Malloc disk", 00:15:13.370 "block_size": 512, 00:15:13.370 
"num_blocks": 65536, 00:15:13.370 "uuid": "b18272a6-753b-4803-beba-e60bbaf2b714", 00:15:13.370 "assigned_rate_limits": { 00:15:13.370 "rw_ios_per_sec": 0, 00:15:13.370 "rw_mbytes_per_sec": 0, 00:15:13.370 "r_mbytes_per_sec": 0, 00:15:13.370 "w_mbytes_per_sec": 0 00:15:13.370 }, 00:15:13.370 "claimed": true, 00:15:13.370 "claim_type": "exclusive_write", 00:15:13.370 "zoned": false, 00:15:13.370 "supported_io_types": { 00:15:13.370 "read": true, 00:15:13.370 "write": true, 00:15:13.370 "unmap": true, 00:15:13.370 "flush": true, 00:15:13.370 "reset": true, 00:15:13.370 "nvme_admin": false, 00:15:13.370 "nvme_io": false, 00:15:13.370 "nvme_io_md": false, 00:15:13.370 "write_zeroes": true, 00:15:13.370 "zcopy": true, 00:15:13.370 "get_zone_info": false, 00:15:13.370 "zone_management": false, 00:15:13.370 "zone_append": false, 00:15:13.370 "compare": false, 00:15:13.370 "compare_and_write": false, 00:15:13.370 "abort": true, 00:15:13.370 "seek_hole": false, 00:15:13.370 "seek_data": false, 00:15:13.370 "copy": true, 00:15:13.370 "nvme_iov_md": false 00:15:13.370 }, 00:15:13.370 "memory_domains": [ 00:15:13.370 { 00:15:13.370 "dma_device_id": "system", 00:15:13.370 "dma_device_type": 1 00:15:13.370 }, 00:15:13.370 { 00:15:13.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.370 "dma_device_type": 2 00:15:13.370 } 00:15:13.370 ], 00:15:13.370 "driver_specific": {} 00:15:13.370 }' 00:15:13.370 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.370 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.370 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:13.370 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:13.370 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:13.370 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:13.370 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:13.629 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:13.629 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:13.629 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:13.629 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:13.629 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:13.629 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:13.629 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:13.629 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:13.889 06:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:13.889 "name": "BaseBdev3", 00:15:13.889 "aliases": [ 00:15:13.889 "6f4f5280-2a9f-4c45-9f53-1202770aaee0" 00:15:13.889 ], 00:15:13.889 "product_name": "Malloc disk", 00:15:13.889 "block_size": 512, 00:15:13.889 "num_blocks": 65536, 00:15:13.889 "uuid": "6f4f5280-2a9f-4c45-9f53-1202770aaee0", 00:15:13.889 "assigned_rate_limits": { 00:15:13.889 "rw_ios_per_sec": 
0, 00:15:13.889 "rw_mbytes_per_sec": 0, 00:15:13.889 "r_mbytes_per_sec": 0, 00:15:13.889 "w_mbytes_per_sec": 0 00:15:13.889 }, 00:15:13.889 "claimed": true, 00:15:13.889 "claim_type": "exclusive_write", 00:15:13.889 "zoned": false, 00:15:13.889 "supported_io_types": { 00:15:13.889 "read": true, 00:15:13.889 "write": true, 00:15:13.889 "unmap": true, 00:15:13.889 "flush": true, 00:15:13.889 "reset": true, 00:15:13.889 "nvme_admin": false, 00:15:13.889 "nvme_io": false, 00:15:13.889 "nvme_io_md": false, 00:15:13.889 "write_zeroes": true, 00:15:13.889 "zcopy": true, 00:15:13.889 "get_zone_info": false, 00:15:13.889 "zone_management": false, 00:15:13.889 "zone_append": false, 00:15:13.889 "compare": false, 00:15:13.889 "compare_and_write": false, 00:15:13.889 "abort": true, 00:15:13.889 "seek_hole": false, 00:15:13.889 "seek_data": false, 00:15:13.889 "copy": true, 00:15:13.889 "nvme_iov_md": false 00:15:13.889 }, 00:15:13.889 "memory_domains": [ 00:15:13.889 { 00:15:13.889 "dma_device_id": "system", 00:15:13.889 "dma_device_type": 1 00:15:13.889 }, 00:15:13.889 { 00:15:13.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.889 "dma_device_type": 2 00:15:13.889 } 00:15:13.889 ], 00:15:13.889 "driver_specific": {} 00:15:13.889 }' 00:15:13.889 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.889 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.889 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:13.889 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:13.889 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:14.148 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:14.148 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:14.148 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:14.148 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:14.148 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.148 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.148 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:14.148 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:14.148 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:14.148 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:14.407 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:14.407 "name": "BaseBdev4", 00:15:14.407 "aliases": [ 00:15:14.407 "7dd6c9d1-dd78-4c55-92c7-04d3b2842c9c" 00:15:14.407 ], 00:15:14.407 "product_name": "Malloc disk", 00:15:14.407 "block_size": 512, 00:15:14.407 "num_blocks": 65536, 00:15:14.407 "uuid": "7dd6c9d1-dd78-4c55-92c7-04d3b2842c9c", 00:15:14.407 "assigned_rate_limits": { 00:15:14.407 "rw_ios_per_sec": 0, 00:15:14.407 "rw_mbytes_per_sec": 0, 00:15:14.407 "r_mbytes_per_sec": 0, 00:15:14.407 "w_mbytes_per_sec": 0 00:15:14.407 }, 00:15:14.407 "claimed": 
true, 00:15:14.408 "claim_type": "exclusive_write", 00:15:14.408 "zoned": false, 00:15:14.408 "supported_io_types": { 00:15:14.408 "read": true, 00:15:14.408 "write": true, 00:15:14.408 "unmap": true, 00:15:14.408 "flush": true, 00:15:14.408 "reset": true, 00:15:14.408 "nvme_admin": false, 00:15:14.408 "nvme_io": false, 00:15:14.408 "nvme_io_md": false, 00:15:14.408 "write_zeroes": true, 00:15:14.408 "zcopy": true, 00:15:14.408 "get_zone_info": false, 00:15:14.408 "zone_management": false, 00:15:14.408 "zone_append": false, 00:15:14.408 "compare": false, 00:15:14.408 "compare_and_write": false, 00:15:14.408 "abort": true, 00:15:14.408 "seek_hole": false, 00:15:14.408 "seek_data": false, 00:15:14.408 "copy": true, 00:15:14.408 "nvme_iov_md": false 00:15:14.408 }, 00:15:14.408 "memory_domains": [ 00:15:14.408 { 00:15:14.408 "dma_device_id": "system", 00:15:14.408 "dma_device_type": 1 00:15:14.408 }, 00:15:14.408 { 00:15:14.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.408 "dma_device_type": 2 00:15:14.408 } 00:15:14.408 ], 00:15:14.408 "driver_specific": {} 00:15:14.408 }' 00:15:14.408 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:14.408 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:14.408 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:14.408 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:14.666 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:14.666 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:14.666 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:14.666 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:14.666 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:14.666 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.666 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.666 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:14.666 06:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:14.925 [2024-08-14 06:46:42.082433] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:14.925 [2024-08-14 06:46:42.082476] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.925 [2024-08-14 06:46:42.082543] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.925 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:14.925 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:14.925 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:14.925 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:14.925 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:14.925 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid offline concat 64 3 00:15:14.925 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:14.925 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:14.925 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:14.925 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:14.925 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:14.925 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.926 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.926 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.926 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.926 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.926 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.185 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:15.185 "name": "Existed_Raid", 00:15:15.185 "uuid": "1a3e48ae-090a-43a7-ac31-ef6fd2490e60", 00:15:15.185 "strip_size_kb": 64, 00:15:15.185 "state": "offline", 00:15:15.185 "raid_level": "concat", 00:15:15.185 "superblock": true, 00:15:15.185 "num_base_bdevs": 4, 00:15:15.185 "num_base_bdevs_discovered": 3, 00:15:15.185 "num_base_bdevs_operational": 3, 00:15:15.185 "base_bdevs_list": [ 00:15:15.185 { 00:15:15.185 "name": null, 00:15:15.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.185 "is_configured": false, 00:15:15.185 "data_offset": 2048, 00:15:15.185 "data_size": 63488 00:15:15.185 }, 00:15:15.185 { 00:15:15.185 "name": "BaseBdev2", 00:15:15.185 "uuid": "b18272a6-753b-4803-beba-e60bbaf2b714", 00:15:15.185 "is_configured": true, 00:15:15.185 "data_offset": 2048, 00:15:15.185 "data_size": 63488 00:15:15.185 }, 00:15:15.185 { 00:15:15.185 "name": "BaseBdev3", 00:15:15.185 "uuid": "6f4f5280-2a9f-4c45-9f53-1202770aaee0", 00:15:15.185 "is_configured": true, 00:15:15.185 "data_offset": 2048, 00:15:15.185 "data_size": 63488 00:15:15.185 }, 00:15:15.185 { 00:15:15.185 "name": "BaseBdev4", 00:15:15.185 "uuid": "7dd6c9d1-dd78-4c55-92c7-04d3b2842c9c", 00:15:15.185 "is_configured": true, 00:15:15.185 "data_offset": 2048, 00:15:15.185 "data_size": 63488 00:15:15.185 } 00:15:15.185 ] 00:15:15.185 }' 00:15:15.185 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:15.185 06:46:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.752 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:15.752 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:15.753 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:15.753 06:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:16.011 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:16.011 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:16.011 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:16.271 [2024-08-14 06:46:43.272121] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:16.271 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:16.271 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:16.271 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.271 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:16.271 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:16.271 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:16.271 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:16.531 [2024-08-14 06:46:43.690898] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:16.531 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:16.531 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:16.531 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.531 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:16.790 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:16.790 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:16.790 06:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:17.048 [2024-08-14 06:46:44.113863] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:17.048 [2024-08-14 06:46:44.113934] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:17.048 06:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:17.048 06:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:17.048 06:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.048 06:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:17.307 06:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:17.307 06:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 
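At this point the trace has walked the teardown path: BaseBdev1 was deleted first, and the loop over the remaining members deletes BaseBdev2, BaseBdev3 and BaseBdev4 in turn, re-querying bdev_raid_get_bdevs all after each removal until Existed_Raid is no longer reported. A minimal sketch of that delete-and-verify loop, assuming an SPDK target on the same socket path as in the trace (the echo messages are purely illustrative and not part of the test script), might look like:

#!/usr/bin/env bash
# Sketch only: mirrors the teardown loop visible in the trace above.
# Assumes an SPDK target listening on /var/tmp/spdk-raid.sock whose RAID bdev
# Existed_Raid is backed by malloc bdevs BaseBdev1..BaseBdev4.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    # Drop one member; the raid bdev degrades and eventually disappears.
    $rpc bdev_malloc_delete "$bdev"

    # Re-read the raid bdev list; select(.) turns a null name into empty output.
    raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
    echo "after deleting $bdev: raid_bdev='${raid_bdev}'"
done

# Once the last base bdev is gone, no raid bdev should be reported at all.
[[ -z "$raid_bdev" ]] && echo "Existed_Raid cleaned up as expected"
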
00:15:17.307 06:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:17.307 06:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:17.307 06:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:17.307 06:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:17.307 BaseBdev2 00:15:17.566 06:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:17.566 06:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:17.566 06:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:17.566 06:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:17.566 06:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:17.566 06:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:17.566 06:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:17.566 06:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:17.825 [ 00:15:17.825 { 00:15:17.825 "name": "BaseBdev2", 00:15:17.825 "aliases": [ 00:15:17.825 "a0edb929-71a7-423e-8576-6688eb5c13d1" 00:15:17.825 ], 00:15:17.825 "product_name": "Malloc disk", 00:15:17.825 "block_size": 512, 00:15:17.825 "num_blocks": 65536, 00:15:17.825 "uuid": "a0edb929-71a7-423e-8576-6688eb5c13d1", 00:15:17.825 "assigned_rate_limits": { 00:15:17.825 "rw_ios_per_sec": 0, 00:15:17.825 "rw_mbytes_per_sec": 0, 00:15:17.825 "r_mbytes_per_sec": 0, 00:15:17.825 "w_mbytes_per_sec": 0 00:15:17.825 }, 00:15:17.825 "claimed": false, 00:15:17.825 "zoned": false, 00:15:17.825 "supported_io_types": { 00:15:17.825 "read": true, 00:15:17.825 "write": true, 00:15:17.825 "unmap": true, 00:15:17.825 "flush": true, 00:15:17.825 "reset": true, 00:15:17.825 "nvme_admin": false, 00:15:17.825 "nvme_io": false, 00:15:17.825 "nvme_io_md": false, 00:15:17.825 "write_zeroes": true, 00:15:17.825 "zcopy": true, 00:15:17.825 "get_zone_info": false, 00:15:17.825 "zone_management": false, 00:15:17.825 "zone_append": false, 00:15:17.825 "compare": false, 00:15:17.825 "compare_and_write": false, 00:15:17.825 "abort": true, 00:15:17.825 "seek_hole": false, 00:15:17.825 "seek_data": false, 00:15:17.825 "copy": true, 00:15:17.825 "nvme_iov_md": false 00:15:17.825 }, 00:15:17.825 "memory_domains": [ 00:15:17.825 { 00:15:17.825 "dma_device_id": "system", 00:15:17.825 "dma_device_type": 1 00:15:17.825 }, 00:15:17.825 { 00:15:17.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.825 "dma_device_type": 2 00:15:17.825 } 00:15:17.825 ], 00:15:17.825 "driver_specific": {} 00:15:17.825 } 00:15:17.825 ] 00:15:17.825 06:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:17.825 06:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:17.825 06:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < 
num_base_bdevs )) 00:15:17.825 06:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:18.084 BaseBdev3 00:15:18.084 06:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:18.084 06:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:18.084 06:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:18.084 06:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:18.084 06:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:18.084 06:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:18.084 06:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:18.342 06:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:18.342 [ 00:15:18.342 { 00:15:18.342 "name": "BaseBdev3", 00:15:18.342 "aliases": [ 00:15:18.342 "d5a8eb83-3ea0-4f8a-ad07-bfab5556ac05" 00:15:18.342 ], 00:15:18.342 "product_name": "Malloc disk", 00:15:18.342 "block_size": 512, 00:15:18.342 "num_blocks": 65536, 00:15:18.342 "uuid": "d5a8eb83-3ea0-4f8a-ad07-bfab5556ac05", 00:15:18.342 "assigned_rate_limits": { 00:15:18.342 "rw_ios_per_sec": 0, 00:15:18.342 "rw_mbytes_per_sec": 0, 00:15:18.342 "r_mbytes_per_sec": 0, 00:15:18.342 "w_mbytes_per_sec": 0 00:15:18.342 }, 00:15:18.343 "claimed": false, 00:15:18.343 "zoned": false, 00:15:18.343 "supported_io_types": { 00:15:18.343 "read": true, 00:15:18.343 "write": true, 00:15:18.343 "unmap": true, 00:15:18.343 "flush": true, 00:15:18.343 "reset": true, 00:15:18.343 "nvme_admin": false, 00:15:18.343 "nvme_io": false, 00:15:18.343 "nvme_io_md": false, 00:15:18.343 "write_zeroes": true, 00:15:18.343 "zcopy": true, 00:15:18.343 "get_zone_info": false, 00:15:18.343 "zone_management": false, 00:15:18.343 "zone_append": false, 00:15:18.343 "compare": false, 00:15:18.343 "compare_and_write": false, 00:15:18.343 "abort": true, 00:15:18.343 "seek_hole": false, 00:15:18.343 "seek_data": false, 00:15:18.343 "copy": true, 00:15:18.343 "nvme_iov_md": false 00:15:18.343 }, 00:15:18.343 "memory_domains": [ 00:15:18.343 { 00:15:18.343 "dma_device_id": "system", 00:15:18.343 "dma_device_type": 1 00:15:18.343 }, 00:15:18.343 { 00:15:18.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.343 "dma_device_type": 2 00:15:18.343 } 00:15:18.343 ], 00:15:18.343 "driver_specific": {} 00:15:18.343 } 00:15:18.343 ] 00:15:18.343 06:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:18.343 06:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:18.343 06:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:18.343 06:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:18.602 BaseBdev4 00:15:18.602 06:46:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:18.602 06:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:15:18.602 06:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:18.602 06:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:18.602 06:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:18.602 06:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:18.602 06:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:18.861 06:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:19.120 [ 00:15:19.120 { 00:15:19.120 "name": "BaseBdev4", 00:15:19.120 "aliases": [ 00:15:19.120 "03507309-eb47-4688-9e8b-1063998153f8" 00:15:19.120 ], 00:15:19.120 "product_name": "Malloc disk", 00:15:19.120 "block_size": 512, 00:15:19.120 "num_blocks": 65536, 00:15:19.120 "uuid": "03507309-eb47-4688-9e8b-1063998153f8", 00:15:19.120 "assigned_rate_limits": { 00:15:19.120 "rw_ios_per_sec": 0, 00:15:19.120 "rw_mbytes_per_sec": 0, 00:15:19.120 "r_mbytes_per_sec": 0, 00:15:19.120 "w_mbytes_per_sec": 0 00:15:19.120 }, 00:15:19.120 "claimed": false, 00:15:19.120 "zoned": false, 00:15:19.120 "supported_io_types": { 00:15:19.120 "read": true, 00:15:19.120 "write": true, 00:15:19.120 "unmap": true, 00:15:19.120 "flush": true, 00:15:19.120 "reset": true, 00:15:19.120 "nvme_admin": false, 00:15:19.120 "nvme_io": false, 00:15:19.120 "nvme_io_md": false, 00:15:19.120 "write_zeroes": true, 00:15:19.120 "zcopy": true, 00:15:19.120 "get_zone_info": false, 00:15:19.120 "zone_management": false, 00:15:19.120 "zone_append": false, 00:15:19.120 "compare": false, 00:15:19.120 "compare_and_write": false, 00:15:19.120 "abort": true, 00:15:19.120 "seek_hole": false, 00:15:19.120 "seek_data": false, 00:15:19.120 "copy": true, 00:15:19.120 "nvme_iov_md": false 00:15:19.120 }, 00:15:19.120 "memory_domains": [ 00:15:19.120 { 00:15:19.120 "dma_device_id": "system", 00:15:19.120 "dma_device_type": 1 00:15:19.120 }, 00:15:19.120 { 00:15:19.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.120 "dma_device_type": 2 00:15:19.120 } 00:15:19.120 ], 00:15:19.120 "driver_specific": {} 00:15:19.120 } 00:15:19.120 ] 00:15:19.120 06:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:19.120 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:19.120 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:19.120 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:19.120 [2024-08-14 06:46:46.365399] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.120 [2024-08-14 06:46:46.365474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.120 [2024-08-14 06:46:46.365521] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.120 [2024-08-14 06:46:46.367517] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:19.120 [2024-08-14 06:46:46.367657] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:19.379 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:19.379 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:19.379 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:19.379 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:19.379 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:19.379 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:19.379 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:19.379 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:19.379 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:19.379 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:19.379 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.379 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.379 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:19.379 "name": "Existed_Raid", 00:15:19.379 "uuid": "cfea8673-37d3-4122-ae22-e8bdfde8dd3a", 00:15:19.379 "strip_size_kb": 64, 00:15:19.379 "state": "configuring", 00:15:19.379 "raid_level": "concat", 00:15:19.379 "superblock": true, 00:15:19.379 "num_base_bdevs": 4, 00:15:19.379 "num_base_bdevs_discovered": 3, 00:15:19.379 "num_base_bdevs_operational": 4, 00:15:19.379 "base_bdevs_list": [ 00:15:19.379 { 00:15:19.379 "name": "BaseBdev1", 00:15:19.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.379 "is_configured": false, 00:15:19.379 "data_offset": 0, 00:15:19.379 "data_size": 0 00:15:19.379 }, 00:15:19.379 { 00:15:19.379 "name": "BaseBdev2", 00:15:19.379 "uuid": "a0edb929-71a7-423e-8576-6688eb5c13d1", 00:15:19.379 "is_configured": true, 00:15:19.379 "data_offset": 2048, 00:15:19.379 "data_size": 63488 00:15:19.379 }, 00:15:19.379 { 00:15:19.379 "name": "BaseBdev3", 00:15:19.380 "uuid": "d5a8eb83-3ea0-4f8a-ad07-bfab5556ac05", 00:15:19.380 "is_configured": true, 00:15:19.380 "data_offset": 2048, 00:15:19.380 "data_size": 63488 00:15:19.380 }, 00:15:19.380 { 00:15:19.380 "name": "BaseBdev4", 00:15:19.380 "uuid": "03507309-eb47-4688-9e8b-1063998153f8", 00:15:19.380 "is_configured": true, 00:15:19.380 "data_offset": 2048, 00:15:19.380 "data_size": 63488 00:15:19.380 } 00:15:19.380 ] 00:15:19.380 }' 00:15:19.380 06:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:19.380 06:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.947 06:46:47 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:20.205 [2024-08-14 06:46:47.355681] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:20.205 06:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:20.205 06:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:20.206 06:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:20.206 06:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:20.206 06:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:20.206 06:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:20.206 06:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:20.206 06:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:20.206 06:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:20.206 06:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:20.206 06:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.206 06:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.464 06:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:20.465 "name": "Existed_Raid", 00:15:20.465 "uuid": "cfea8673-37d3-4122-ae22-e8bdfde8dd3a", 00:15:20.465 "strip_size_kb": 64, 00:15:20.465 "state": "configuring", 00:15:20.465 "raid_level": "concat", 00:15:20.465 "superblock": true, 00:15:20.465 "num_base_bdevs": 4, 00:15:20.465 "num_base_bdevs_discovered": 2, 00:15:20.465 "num_base_bdevs_operational": 4, 00:15:20.465 "base_bdevs_list": [ 00:15:20.465 { 00:15:20.465 "name": "BaseBdev1", 00:15:20.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.465 "is_configured": false, 00:15:20.465 "data_offset": 0, 00:15:20.465 "data_size": 0 00:15:20.465 }, 00:15:20.465 { 00:15:20.465 "name": null, 00:15:20.465 "uuid": "a0edb929-71a7-423e-8576-6688eb5c13d1", 00:15:20.465 "is_configured": false, 00:15:20.465 "data_offset": 2048, 00:15:20.465 "data_size": 63488 00:15:20.465 }, 00:15:20.465 { 00:15:20.465 "name": "BaseBdev3", 00:15:20.465 "uuid": "d5a8eb83-3ea0-4f8a-ad07-bfab5556ac05", 00:15:20.465 "is_configured": true, 00:15:20.465 "data_offset": 2048, 00:15:20.465 "data_size": 63488 00:15:20.465 }, 00:15:20.465 { 00:15:20.465 "name": "BaseBdev4", 00:15:20.465 "uuid": "03507309-eb47-4688-9e8b-1063998153f8", 00:15:20.465 "is_configured": true, 00:15:20.465 "data_offset": 2048, 00:15:20.465 "data_size": 63488 00:15:20.465 } 00:15:20.465 ] 00:15:20.465 }' 00:15:20.465 06:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:20.465 06:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.067 06:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.067 06:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:21.345 06:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:21.345 06:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:21.345 [2024-08-14 06:46:48.529033] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.345 BaseBdev1 00:15:21.345 06:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:21.345 06:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:21.345 06:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:21.345 06:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:21.345 06:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:21.345 06:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:21.345 06:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:21.604 06:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:21.863 [ 00:15:21.863 { 00:15:21.863 "name": "BaseBdev1", 00:15:21.863 "aliases": [ 00:15:21.863 "614a3f69-b897-4181-b272-279d12f164a0" 00:15:21.863 ], 00:15:21.863 "product_name": "Malloc disk", 00:15:21.863 "block_size": 512, 00:15:21.863 "num_blocks": 65536, 00:15:21.863 "uuid": "614a3f69-b897-4181-b272-279d12f164a0", 00:15:21.863 "assigned_rate_limits": { 00:15:21.863 "rw_ios_per_sec": 0, 00:15:21.863 "rw_mbytes_per_sec": 0, 00:15:21.863 "r_mbytes_per_sec": 0, 00:15:21.863 "w_mbytes_per_sec": 0 00:15:21.863 }, 00:15:21.863 "claimed": true, 00:15:21.863 "claim_type": "exclusive_write", 00:15:21.863 "zoned": false, 00:15:21.863 "supported_io_types": { 00:15:21.863 "read": true, 00:15:21.863 "write": true, 00:15:21.863 "unmap": true, 00:15:21.863 "flush": true, 00:15:21.863 "reset": true, 00:15:21.863 "nvme_admin": false, 00:15:21.863 "nvme_io": false, 00:15:21.863 "nvme_io_md": false, 00:15:21.863 "write_zeroes": true, 00:15:21.863 "zcopy": true, 00:15:21.863 "get_zone_info": false, 00:15:21.863 "zone_management": false, 00:15:21.863 "zone_append": false, 00:15:21.863 "compare": false, 00:15:21.863 "compare_and_write": false, 00:15:21.863 "abort": true, 00:15:21.863 "seek_hole": false, 00:15:21.863 "seek_data": false, 00:15:21.863 "copy": true, 00:15:21.863 "nvme_iov_md": false 00:15:21.863 }, 00:15:21.863 "memory_domains": [ 00:15:21.863 { 00:15:21.863 "dma_device_id": "system", 00:15:21.863 "dma_device_type": 1 00:15:21.863 }, 00:15:21.863 { 00:15:21.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.863 "dma_device_type": 2 00:15:21.863 } 00:15:21.863 ], 00:15:21.863 "driver_specific": {} 00:15:21.863 } 00:15:21.863 ] 00:15:21.863 06:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:21.863 06:46:48 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:21.863 06:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:21.863 06:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:21.863 06:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:21.863 06:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:21.863 06:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:21.863 06:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:21.863 06:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.863 06:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.863 06:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.863 06:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.863 06:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.122 06:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:22.122 "name": "Existed_Raid", 00:15:22.122 "uuid": "cfea8673-37d3-4122-ae22-e8bdfde8dd3a", 00:15:22.122 "strip_size_kb": 64, 00:15:22.122 "state": "configuring", 00:15:22.122 "raid_level": "concat", 00:15:22.122 "superblock": true, 00:15:22.122 "num_base_bdevs": 4, 00:15:22.122 "num_base_bdevs_discovered": 3, 00:15:22.122 "num_base_bdevs_operational": 4, 00:15:22.122 "base_bdevs_list": [ 00:15:22.122 { 00:15:22.122 "name": "BaseBdev1", 00:15:22.122 "uuid": "614a3f69-b897-4181-b272-279d12f164a0", 00:15:22.122 "is_configured": true, 00:15:22.122 "data_offset": 2048, 00:15:22.122 "data_size": 63488 00:15:22.122 }, 00:15:22.122 { 00:15:22.122 "name": null, 00:15:22.122 "uuid": "a0edb929-71a7-423e-8576-6688eb5c13d1", 00:15:22.122 "is_configured": false, 00:15:22.122 "data_offset": 2048, 00:15:22.122 "data_size": 63488 00:15:22.122 }, 00:15:22.122 { 00:15:22.122 "name": "BaseBdev3", 00:15:22.122 "uuid": "d5a8eb83-3ea0-4f8a-ad07-bfab5556ac05", 00:15:22.122 "is_configured": true, 00:15:22.122 "data_offset": 2048, 00:15:22.122 "data_size": 63488 00:15:22.122 }, 00:15:22.122 { 00:15:22.122 "name": "BaseBdev4", 00:15:22.122 "uuid": "03507309-eb47-4688-9e8b-1063998153f8", 00:15:22.122 "is_configured": true, 00:15:22.122 "data_offset": 2048, 00:15:22.122 "data_size": 63488 00:15:22.122 } 00:15:22.122 ] 00:15:22.122 }' 00:15:22.122 06:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:22.122 06:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.690 06:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.690 06:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:22.949 06:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:22.949 06:46:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:22.949 [2024-08-14 06:46:50.138386] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:22.950 06:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:22.950 06:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:22.950 06:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:22.950 06:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:22.950 06:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:22.950 06:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:22.950 06:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.950 06:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.950 06:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.950 06:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.950 06:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.950 06:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.208 06:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.208 "name": "Existed_Raid", 00:15:23.208 "uuid": "cfea8673-37d3-4122-ae22-e8bdfde8dd3a", 00:15:23.208 "strip_size_kb": 64, 00:15:23.208 "state": "configuring", 00:15:23.208 "raid_level": "concat", 00:15:23.208 "superblock": true, 00:15:23.209 "num_base_bdevs": 4, 00:15:23.209 "num_base_bdevs_discovered": 2, 00:15:23.209 "num_base_bdevs_operational": 4, 00:15:23.209 "base_bdevs_list": [ 00:15:23.209 { 00:15:23.209 "name": "BaseBdev1", 00:15:23.209 "uuid": "614a3f69-b897-4181-b272-279d12f164a0", 00:15:23.209 "is_configured": true, 00:15:23.209 "data_offset": 2048, 00:15:23.209 "data_size": 63488 00:15:23.209 }, 00:15:23.209 { 00:15:23.209 "name": null, 00:15:23.209 "uuid": "a0edb929-71a7-423e-8576-6688eb5c13d1", 00:15:23.209 "is_configured": false, 00:15:23.209 "data_offset": 2048, 00:15:23.209 "data_size": 63488 00:15:23.209 }, 00:15:23.209 { 00:15:23.209 "name": null, 00:15:23.209 "uuid": "d5a8eb83-3ea0-4f8a-ad07-bfab5556ac05", 00:15:23.209 "is_configured": false, 00:15:23.209 "data_offset": 2048, 00:15:23.209 "data_size": 63488 00:15:23.209 }, 00:15:23.209 { 00:15:23.209 "name": "BaseBdev4", 00:15:23.209 "uuid": "03507309-eb47-4688-9e8b-1063998153f8", 00:15:23.209 "is_configured": true, 00:15:23.209 "data_offset": 2048, 00:15:23.209 "data_size": 63488 00:15:23.209 } 00:15:23.209 ] 00:15:23.209 }' 00:15:23.209 06:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.209 06:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.776 06:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.776 06:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:24.034 06:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:24.034 06:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:24.293 [2024-08-14 06:46:51.336481] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.293 06:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:24.293 06:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:24.293 06:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:24.294 06:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:24.294 06:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:24.294 06:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:24.294 06:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:24.294 06:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:24.294 06:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:24.294 06:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:24.294 06:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.294 06:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.552 06:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:24.552 "name": "Existed_Raid", 00:15:24.552 "uuid": "cfea8673-37d3-4122-ae22-e8bdfde8dd3a", 00:15:24.552 "strip_size_kb": 64, 00:15:24.552 "state": "configuring", 00:15:24.552 "raid_level": "concat", 00:15:24.552 "superblock": true, 00:15:24.552 "num_base_bdevs": 4, 00:15:24.552 "num_base_bdevs_discovered": 3, 00:15:24.552 "num_base_bdevs_operational": 4, 00:15:24.552 "base_bdevs_list": [ 00:15:24.552 { 00:15:24.552 "name": "BaseBdev1", 00:15:24.552 "uuid": "614a3f69-b897-4181-b272-279d12f164a0", 00:15:24.552 "is_configured": true, 00:15:24.552 "data_offset": 2048, 00:15:24.552 "data_size": 63488 00:15:24.552 }, 00:15:24.552 { 00:15:24.552 "name": null, 00:15:24.552 "uuid": "a0edb929-71a7-423e-8576-6688eb5c13d1", 00:15:24.552 "is_configured": false, 00:15:24.552 "data_offset": 2048, 00:15:24.552 "data_size": 63488 00:15:24.552 }, 00:15:24.552 { 00:15:24.552 "name": "BaseBdev3", 00:15:24.552 "uuid": "d5a8eb83-3ea0-4f8a-ad07-bfab5556ac05", 00:15:24.552 "is_configured": true, 00:15:24.552 "data_offset": 2048, 00:15:24.552 "data_size": 63488 00:15:24.552 }, 00:15:24.552 { 00:15:24.552 "name": "BaseBdev4", 00:15:24.552 "uuid": "03507309-eb47-4688-9e8b-1063998153f8", 00:15:24.552 "is_configured": true, 00:15:24.552 "data_offset": 2048, 
00:15:24.552 "data_size": 63488 00:15:24.552 } 00:15:24.552 ] 00:15:24.552 }' 00:15:24.552 06:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:24.552 06:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.119 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.119 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:25.119 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:25.119 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:25.378 [2024-08-14 06:46:52.538417] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.378 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:25.378 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:25.378 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:25.378 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:25.378 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:25.378 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:25.378 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:25.378 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:25.378 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:25.378 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:25.378 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.378 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.637 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:25.637 "name": "Existed_Raid", 00:15:25.637 "uuid": "cfea8673-37d3-4122-ae22-e8bdfde8dd3a", 00:15:25.637 "strip_size_kb": 64, 00:15:25.637 "state": "configuring", 00:15:25.637 "raid_level": "concat", 00:15:25.637 "superblock": true, 00:15:25.637 "num_base_bdevs": 4, 00:15:25.637 "num_base_bdevs_discovered": 2, 00:15:25.637 "num_base_bdevs_operational": 4, 00:15:25.637 "base_bdevs_list": [ 00:15:25.637 { 00:15:25.637 "name": null, 00:15:25.637 "uuid": "614a3f69-b897-4181-b272-279d12f164a0", 00:15:25.637 "is_configured": false, 00:15:25.637 "data_offset": 2048, 00:15:25.637 "data_size": 63488 00:15:25.637 }, 00:15:25.637 { 00:15:25.637 "name": null, 00:15:25.637 "uuid": "a0edb929-71a7-423e-8576-6688eb5c13d1", 00:15:25.637 "is_configured": false, 00:15:25.637 "data_offset": 2048, 00:15:25.637 "data_size": 63488 00:15:25.637 }, 00:15:25.637 { 00:15:25.637 "name": "BaseBdev3", 00:15:25.637 "uuid": 
"d5a8eb83-3ea0-4f8a-ad07-bfab5556ac05", 00:15:25.637 "is_configured": true, 00:15:25.637 "data_offset": 2048, 00:15:25.638 "data_size": 63488 00:15:25.638 }, 00:15:25.638 { 00:15:25.638 "name": "BaseBdev4", 00:15:25.638 "uuid": "03507309-eb47-4688-9e8b-1063998153f8", 00:15:25.638 "is_configured": true, 00:15:25.638 "data_offset": 2048, 00:15:25.638 "data_size": 63488 00:15:25.638 } 00:15:25.638 ] 00:15:25.638 }' 00:15:25.638 06:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:25.638 06:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.206 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:26.206 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.465 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:26.465 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:26.724 [2024-08-14 06:46:53.731447] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.724 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:26.724 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:26.724 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:26.724 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:26.724 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:26.724 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:26.724 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:26.724 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.724 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:26.724 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.724 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.724 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.724 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.724 "name": "Existed_Raid", 00:15:26.724 "uuid": "cfea8673-37d3-4122-ae22-e8bdfde8dd3a", 00:15:26.724 "strip_size_kb": 64, 00:15:26.724 "state": "configuring", 00:15:26.724 "raid_level": "concat", 00:15:26.724 "superblock": true, 00:15:26.724 "num_base_bdevs": 4, 00:15:26.724 "num_base_bdevs_discovered": 3, 00:15:26.724 "num_base_bdevs_operational": 4, 00:15:26.724 "base_bdevs_list": [ 00:15:26.724 { 00:15:26.724 "name": null, 00:15:26.724 "uuid": "614a3f69-b897-4181-b272-279d12f164a0", 00:15:26.724 "is_configured": false, 
00:15:26.724 "data_offset": 2048, 00:15:26.724 "data_size": 63488 00:15:26.724 }, 00:15:26.724 { 00:15:26.724 "name": "BaseBdev2", 00:15:26.724 "uuid": "a0edb929-71a7-423e-8576-6688eb5c13d1", 00:15:26.724 "is_configured": true, 00:15:26.724 "data_offset": 2048, 00:15:26.724 "data_size": 63488 00:15:26.724 }, 00:15:26.724 { 00:15:26.724 "name": "BaseBdev3", 00:15:26.724 "uuid": "d5a8eb83-3ea0-4f8a-ad07-bfab5556ac05", 00:15:26.724 "is_configured": true, 00:15:26.724 "data_offset": 2048, 00:15:26.724 "data_size": 63488 00:15:26.724 }, 00:15:26.724 { 00:15:26.724 "name": "BaseBdev4", 00:15:26.724 "uuid": "03507309-eb47-4688-9e8b-1063998153f8", 00:15:26.724 "is_configured": true, 00:15:26.724 "data_offset": 2048, 00:15:26.724 "data_size": 63488 00:15:26.724 } 00:15:26.724 ] 00:15:26.724 }' 00:15:26.724 06:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:26.724 06:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.292 06:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:27.551 06:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.551 06:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:27.551 06:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.551 06:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:27.810 06:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 614a3f69-b897-4181-b272-279d12f164a0 00:15:28.070 [2024-08-14 06:46:55.176515] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:28.070 [2024-08-14 06:46:55.176841] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:28.070 [2024-08-14 06:46:55.176901] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:28.070 [2024-08-14 06:46:55.177224] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:15:28.070 [2024-08-14 06:46:55.177393] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:28.070 [2024-08-14 06:46:55.177446] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:15:28.070 [2024-08-14 06:46:55.177602] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.070 NewBaseBdev 00:15:28.070 06:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:28.070 06:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:15:28.070 06:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:28.070 06:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:28.070 06:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:28.070 06:46:55 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:28.070 06:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:28.328 06:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:28.588 [ 00:15:28.588 { 00:15:28.588 "name": "NewBaseBdev", 00:15:28.588 "aliases": [ 00:15:28.588 "614a3f69-b897-4181-b272-279d12f164a0" 00:15:28.588 ], 00:15:28.588 "product_name": "Malloc disk", 00:15:28.588 "block_size": 512, 00:15:28.588 "num_blocks": 65536, 00:15:28.588 "uuid": "614a3f69-b897-4181-b272-279d12f164a0", 00:15:28.588 "assigned_rate_limits": { 00:15:28.588 "rw_ios_per_sec": 0, 00:15:28.588 "rw_mbytes_per_sec": 0, 00:15:28.588 "r_mbytes_per_sec": 0, 00:15:28.588 "w_mbytes_per_sec": 0 00:15:28.588 }, 00:15:28.588 "claimed": true, 00:15:28.588 "claim_type": "exclusive_write", 00:15:28.588 "zoned": false, 00:15:28.588 "supported_io_types": { 00:15:28.589 "read": true, 00:15:28.589 "write": true, 00:15:28.589 "unmap": true, 00:15:28.589 "flush": true, 00:15:28.589 "reset": true, 00:15:28.589 "nvme_admin": false, 00:15:28.589 "nvme_io": false, 00:15:28.589 "nvme_io_md": false, 00:15:28.589 "write_zeroes": true, 00:15:28.589 "zcopy": true, 00:15:28.589 "get_zone_info": false, 00:15:28.589 "zone_management": false, 00:15:28.589 "zone_append": false, 00:15:28.589 "compare": false, 00:15:28.589 "compare_and_write": false, 00:15:28.589 "abort": true, 00:15:28.589 "seek_hole": false, 00:15:28.589 "seek_data": false, 00:15:28.589 "copy": true, 00:15:28.589 "nvme_iov_md": false 00:15:28.589 }, 00:15:28.589 "memory_domains": [ 00:15:28.589 { 00:15:28.589 "dma_device_id": "system", 00:15:28.589 "dma_device_type": 1 00:15:28.589 }, 00:15:28.589 { 00:15:28.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.589 "dma_device_type": 2 00:15:28.589 } 00:15:28.589 ], 00:15:28.589 "driver_specific": {} 00:15:28.589 } 00:15:28.589 ] 00:15:28.589 06:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:28.589 06:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:28.589 06:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:28.589 06:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:28.589 06:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:28.589 06:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:28.589 06:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:28.589 06:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:28.589 06:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:28.589 06:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:28.589 06:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:28.589 06:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.589 
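The missing member has just been recreated as NewBaseBdev with the UUID captured earlier, which lets Existed_Raid move back to the online state with all four base bdevs configured. The check that follows amounts to pulling the raid bdev's JSON and comparing a few fields against the expected values; a condensed sketch of that verification, with the expected values hard-coded here only for illustration, could be:

#!/usr/bin/env bash
# Sketch only: condensed form of the online-state check performed in the trace.
# Assumes the same SPDK target socket as in the trace above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

state=$(jq -r '.state'                      <<<"$info")
level=$(jq -r '.raid_level'                 <<<"$info")
strip=$(jq -r '.strip_size_kb'              <<<"$info")
noper=$(jq -r '.num_base_bdevs_operational' <<<"$info")

# Expected after re-adding NewBaseBdev: online, concat, 64 KiB strip, 4 members.
if [[ "$state" == online && "$level" == concat && "$strip" == 64 && "$noper" == 4 ]]; then
    echo "Existed_Raid is back online with all base bdevs configured"
else
    echo "unexpected raid state: $info" >&2
    exit 1
fi
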
06:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.589 06:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:28.589 "name": "Existed_Raid", 00:15:28.589 "uuid": "cfea8673-37d3-4122-ae22-e8bdfde8dd3a", 00:15:28.589 "strip_size_kb": 64, 00:15:28.589 "state": "online", 00:15:28.589 "raid_level": "concat", 00:15:28.589 "superblock": true, 00:15:28.589 "num_base_bdevs": 4, 00:15:28.589 "num_base_bdevs_discovered": 4, 00:15:28.589 "num_base_bdevs_operational": 4, 00:15:28.589 "base_bdevs_list": [ 00:15:28.589 { 00:15:28.589 "name": "NewBaseBdev", 00:15:28.589 "uuid": "614a3f69-b897-4181-b272-279d12f164a0", 00:15:28.589 "is_configured": true, 00:15:28.589 "data_offset": 2048, 00:15:28.589 "data_size": 63488 00:15:28.589 }, 00:15:28.589 { 00:15:28.589 "name": "BaseBdev2", 00:15:28.589 "uuid": "a0edb929-71a7-423e-8576-6688eb5c13d1", 00:15:28.589 "is_configured": true, 00:15:28.589 "data_offset": 2048, 00:15:28.589 "data_size": 63488 00:15:28.589 }, 00:15:28.589 { 00:15:28.589 "name": "BaseBdev3", 00:15:28.589 "uuid": "d5a8eb83-3ea0-4f8a-ad07-bfab5556ac05", 00:15:28.589 "is_configured": true, 00:15:28.589 "data_offset": 2048, 00:15:28.589 "data_size": 63488 00:15:28.589 }, 00:15:28.589 { 00:15:28.589 "name": "BaseBdev4", 00:15:28.589 "uuid": "03507309-eb47-4688-9e8b-1063998153f8", 00:15:28.589 "is_configured": true, 00:15:28.589 "data_offset": 2048, 00:15:28.589 "data_size": 63488 00:15:28.589 } 00:15:28.589 ] 00:15:28.589 }' 00:15:28.589 06:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:28.589 06:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.156 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:29.156 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:29.156 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:29.156 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:29.156 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:29.156 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:29.156 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:29.156 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:29.538 [2024-08-14 06:46:56.542773] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.539 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:29.539 "name": "Existed_Raid", 00:15:29.539 "aliases": [ 00:15:29.539 "cfea8673-37d3-4122-ae22-e8bdfde8dd3a" 00:15:29.539 ], 00:15:29.539 "product_name": "Raid Volume", 00:15:29.539 "block_size": 512, 00:15:29.539 "num_blocks": 253952, 00:15:29.539 "uuid": "cfea8673-37d3-4122-ae22-e8bdfde8dd3a", 00:15:29.539 "assigned_rate_limits": { 00:15:29.539 "rw_ios_per_sec": 0, 00:15:29.539 "rw_mbytes_per_sec": 0, 00:15:29.539 "r_mbytes_per_sec": 0, 00:15:29.539 "w_mbytes_per_sec": 0 00:15:29.539 }, 00:15:29.539 
"claimed": false, 00:15:29.539 "zoned": false, 00:15:29.539 "supported_io_types": { 00:15:29.539 "read": true, 00:15:29.539 "write": true, 00:15:29.539 "unmap": true, 00:15:29.539 "flush": true, 00:15:29.539 "reset": true, 00:15:29.539 "nvme_admin": false, 00:15:29.539 "nvme_io": false, 00:15:29.539 "nvme_io_md": false, 00:15:29.539 "write_zeroes": true, 00:15:29.539 "zcopy": false, 00:15:29.539 "get_zone_info": false, 00:15:29.539 "zone_management": false, 00:15:29.539 "zone_append": false, 00:15:29.539 "compare": false, 00:15:29.539 "compare_and_write": false, 00:15:29.539 "abort": false, 00:15:29.539 "seek_hole": false, 00:15:29.539 "seek_data": false, 00:15:29.539 "copy": false, 00:15:29.539 "nvme_iov_md": false 00:15:29.539 }, 00:15:29.539 "memory_domains": [ 00:15:29.539 { 00:15:29.539 "dma_device_id": "system", 00:15:29.539 "dma_device_type": 1 00:15:29.539 }, 00:15:29.539 { 00:15:29.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.539 "dma_device_type": 2 00:15:29.539 }, 00:15:29.539 { 00:15:29.539 "dma_device_id": "system", 00:15:29.539 "dma_device_type": 1 00:15:29.539 }, 00:15:29.539 { 00:15:29.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.539 "dma_device_type": 2 00:15:29.539 }, 00:15:29.539 { 00:15:29.539 "dma_device_id": "system", 00:15:29.539 "dma_device_type": 1 00:15:29.539 }, 00:15:29.539 { 00:15:29.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.539 "dma_device_type": 2 00:15:29.539 }, 00:15:29.539 { 00:15:29.539 "dma_device_id": "system", 00:15:29.539 "dma_device_type": 1 00:15:29.539 }, 00:15:29.539 { 00:15:29.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.539 "dma_device_type": 2 00:15:29.539 } 00:15:29.539 ], 00:15:29.539 "driver_specific": { 00:15:29.539 "raid": { 00:15:29.539 "uuid": "cfea8673-37d3-4122-ae22-e8bdfde8dd3a", 00:15:29.539 "strip_size_kb": 64, 00:15:29.539 "state": "online", 00:15:29.539 "raid_level": "concat", 00:15:29.539 "superblock": true, 00:15:29.539 "num_base_bdevs": 4, 00:15:29.539 "num_base_bdevs_discovered": 4, 00:15:29.539 "num_base_bdevs_operational": 4, 00:15:29.539 "base_bdevs_list": [ 00:15:29.539 { 00:15:29.539 "name": "NewBaseBdev", 00:15:29.539 "uuid": "614a3f69-b897-4181-b272-279d12f164a0", 00:15:29.539 "is_configured": true, 00:15:29.539 "data_offset": 2048, 00:15:29.539 "data_size": 63488 00:15:29.539 }, 00:15:29.539 { 00:15:29.539 "name": "BaseBdev2", 00:15:29.539 "uuid": "a0edb929-71a7-423e-8576-6688eb5c13d1", 00:15:29.539 "is_configured": true, 00:15:29.539 "data_offset": 2048, 00:15:29.539 "data_size": 63488 00:15:29.539 }, 00:15:29.539 { 00:15:29.539 "name": "BaseBdev3", 00:15:29.539 "uuid": "d5a8eb83-3ea0-4f8a-ad07-bfab5556ac05", 00:15:29.539 "is_configured": true, 00:15:29.539 "data_offset": 2048, 00:15:29.539 "data_size": 63488 00:15:29.539 }, 00:15:29.539 { 00:15:29.539 "name": "BaseBdev4", 00:15:29.539 "uuid": "03507309-eb47-4688-9e8b-1063998153f8", 00:15:29.539 "is_configured": true, 00:15:29.539 "data_offset": 2048, 00:15:29.539 "data_size": 63488 00:15:29.539 } 00:15:29.539 ] 00:15:29.539 } 00:15:29.539 } 00:15:29.539 }' 00:15:29.539 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:29.539 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:29.539 BaseBdev2 00:15:29.539 BaseBdev3 00:15:29.539 BaseBdev4' 00:15:29.539 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:15:29.539 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:29.539 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:29.822 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:29.822 "name": "NewBaseBdev", 00:15:29.822 "aliases": [ 00:15:29.822 "614a3f69-b897-4181-b272-279d12f164a0" 00:15:29.822 ], 00:15:29.822 "product_name": "Malloc disk", 00:15:29.822 "block_size": 512, 00:15:29.822 "num_blocks": 65536, 00:15:29.822 "uuid": "614a3f69-b897-4181-b272-279d12f164a0", 00:15:29.822 "assigned_rate_limits": { 00:15:29.822 "rw_ios_per_sec": 0, 00:15:29.822 "rw_mbytes_per_sec": 0, 00:15:29.822 "r_mbytes_per_sec": 0, 00:15:29.822 "w_mbytes_per_sec": 0 00:15:29.822 }, 00:15:29.822 "claimed": true, 00:15:29.822 "claim_type": "exclusive_write", 00:15:29.822 "zoned": false, 00:15:29.822 "supported_io_types": { 00:15:29.822 "read": true, 00:15:29.822 "write": true, 00:15:29.822 "unmap": true, 00:15:29.822 "flush": true, 00:15:29.822 "reset": true, 00:15:29.822 "nvme_admin": false, 00:15:29.822 "nvme_io": false, 00:15:29.822 "nvme_io_md": false, 00:15:29.822 "write_zeroes": true, 00:15:29.822 "zcopy": true, 00:15:29.822 "get_zone_info": false, 00:15:29.822 "zone_management": false, 00:15:29.822 "zone_append": false, 00:15:29.822 "compare": false, 00:15:29.822 "compare_and_write": false, 00:15:29.822 "abort": true, 00:15:29.822 "seek_hole": false, 00:15:29.822 "seek_data": false, 00:15:29.822 "copy": true, 00:15:29.822 "nvme_iov_md": false 00:15:29.822 }, 00:15:29.822 "memory_domains": [ 00:15:29.822 { 00:15:29.822 "dma_device_id": "system", 00:15:29.822 "dma_device_type": 1 00:15:29.822 }, 00:15:29.822 { 00:15:29.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.822 "dma_device_type": 2 00:15:29.822 } 00:15:29.822 ], 00:15:29.822 "driver_specific": {} 00:15:29.822 }' 00:15:29.822 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:29.822 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:29.822 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:29.822 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:29.822 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:29.822 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:29.822 06:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:29.822 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:29.822 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:29.822 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.080 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.080 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:30.080 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:30.080 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:30.080 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:30.080 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:30.080 "name": "BaseBdev2", 00:15:30.080 "aliases": [ 00:15:30.080 "a0edb929-71a7-423e-8576-6688eb5c13d1" 00:15:30.080 ], 00:15:30.080 "product_name": "Malloc disk", 00:15:30.080 "block_size": 512, 00:15:30.080 "num_blocks": 65536, 00:15:30.080 "uuid": "a0edb929-71a7-423e-8576-6688eb5c13d1", 00:15:30.080 "assigned_rate_limits": { 00:15:30.080 "rw_ios_per_sec": 0, 00:15:30.080 "rw_mbytes_per_sec": 0, 00:15:30.080 "r_mbytes_per_sec": 0, 00:15:30.080 "w_mbytes_per_sec": 0 00:15:30.080 }, 00:15:30.080 "claimed": true, 00:15:30.080 "claim_type": "exclusive_write", 00:15:30.080 "zoned": false, 00:15:30.080 "supported_io_types": { 00:15:30.080 "read": true, 00:15:30.080 "write": true, 00:15:30.080 "unmap": true, 00:15:30.080 "flush": true, 00:15:30.080 "reset": true, 00:15:30.080 "nvme_admin": false, 00:15:30.080 "nvme_io": false, 00:15:30.080 "nvme_io_md": false, 00:15:30.080 "write_zeroes": true, 00:15:30.080 "zcopy": true, 00:15:30.080 "get_zone_info": false, 00:15:30.080 "zone_management": false, 00:15:30.080 "zone_append": false, 00:15:30.080 "compare": false, 00:15:30.080 "compare_and_write": false, 00:15:30.080 "abort": true, 00:15:30.080 "seek_hole": false, 00:15:30.080 "seek_data": false, 00:15:30.080 "copy": true, 00:15:30.080 "nvme_iov_md": false 00:15:30.080 }, 00:15:30.080 "memory_domains": [ 00:15:30.080 { 00:15:30.080 "dma_device_id": "system", 00:15:30.080 "dma_device_type": 1 00:15:30.080 }, 00:15:30.080 { 00:15:30.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.080 "dma_device_type": 2 00:15:30.080 } 00:15:30.080 ], 00:15:30.080 "driver_specific": {} 00:15:30.080 }' 00:15:30.080 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.338 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.338 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:30.338 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.338 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.338 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:30.338 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.338 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.595 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:30.595 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.596 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.596 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:30.596 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:30.596 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:30.596 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:30.853 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:30.853 "name": "BaseBdev3", 00:15:30.853 "aliases": [ 00:15:30.853 "d5a8eb83-3ea0-4f8a-ad07-bfab5556ac05" 00:15:30.853 ], 00:15:30.853 "product_name": "Malloc disk", 00:15:30.853 "block_size": 512, 00:15:30.853 "num_blocks": 65536, 00:15:30.853 "uuid": "d5a8eb83-3ea0-4f8a-ad07-bfab5556ac05", 00:15:30.853 "assigned_rate_limits": { 00:15:30.853 "rw_ios_per_sec": 0, 00:15:30.853 "rw_mbytes_per_sec": 0, 00:15:30.853 "r_mbytes_per_sec": 0, 00:15:30.853 "w_mbytes_per_sec": 0 00:15:30.853 }, 00:15:30.853 "claimed": true, 00:15:30.853 "claim_type": "exclusive_write", 00:15:30.853 "zoned": false, 00:15:30.853 "supported_io_types": { 00:15:30.853 "read": true, 00:15:30.853 "write": true, 00:15:30.853 "unmap": true, 00:15:30.853 "flush": true, 00:15:30.853 "reset": true, 00:15:30.854 "nvme_admin": false, 00:15:30.854 "nvme_io": false, 00:15:30.854 "nvme_io_md": false, 00:15:30.854 "write_zeroes": true, 00:15:30.854 "zcopy": true, 00:15:30.854 "get_zone_info": false, 00:15:30.854 "zone_management": false, 00:15:30.854 "zone_append": false, 00:15:30.854 "compare": false, 00:15:30.854 "compare_and_write": false, 00:15:30.854 "abort": true, 00:15:30.854 "seek_hole": false, 00:15:30.854 "seek_data": false, 00:15:30.854 "copy": true, 00:15:30.854 "nvme_iov_md": false 00:15:30.854 }, 00:15:30.854 "memory_domains": [ 00:15:30.854 { 00:15:30.854 "dma_device_id": "system", 00:15:30.854 "dma_device_type": 1 00:15:30.854 }, 00:15:30.854 { 00:15:30.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.854 "dma_device_type": 2 00:15:30.854 } 00:15:30.854 ], 00:15:30.854 "driver_specific": {} 00:15:30.854 }' 00:15:30.854 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.854 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.854 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:30.854 06:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.854 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.854 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:30.854 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.854 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:31.111 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:31.111 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:31.111 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:31.111 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:31.111 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:31.111 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:31.111 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:31.369 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:15:31.369 "name": "BaseBdev4", 00:15:31.369 "aliases": [ 00:15:31.369 "03507309-eb47-4688-9e8b-1063998153f8" 00:15:31.369 ], 00:15:31.369 "product_name": "Malloc disk", 00:15:31.369 "block_size": 512, 00:15:31.369 "num_blocks": 65536, 00:15:31.369 "uuid": "03507309-eb47-4688-9e8b-1063998153f8", 00:15:31.369 "assigned_rate_limits": { 00:15:31.369 "rw_ios_per_sec": 0, 00:15:31.369 "rw_mbytes_per_sec": 0, 00:15:31.369 "r_mbytes_per_sec": 0, 00:15:31.369 "w_mbytes_per_sec": 0 00:15:31.369 }, 00:15:31.369 "claimed": true, 00:15:31.369 "claim_type": "exclusive_write", 00:15:31.369 "zoned": false, 00:15:31.369 "supported_io_types": { 00:15:31.369 "read": true, 00:15:31.369 "write": true, 00:15:31.369 "unmap": true, 00:15:31.369 "flush": true, 00:15:31.369 "reset": true, 00:15:31.369 "nvme_admin": false, 00:15:31.369 "nvme_io": false, 00:15:31.369 "nvme_io_md": false, 00:15:31.369 "write_zeroes": true, 00:15:31.369 "zcopy": true, 00:15:31.369 "get_zone_info": false, 00:15:31.369 "zone_management": false, 00:15:31.369 "zone_append": false, 00:15:31.369 "compare": false, 00:15:31.369 "compare_and_write": false, 00:15:31.369 "abort": true, 00:15:31.369 "seek_hole": false, 00:15:31.369 "seek_data": false, 00:15:31.369 "copy": true, 00:15:31.369 "nvme_iov_md": false 00:15:31.369 }, 00:15:31.369 "memory_domains": [ 00:15:31.369 { 00:15:31.369 "dma_device_id": "system", 00:15:31.369 "dma_device_type": 1 00:15:31.369 }, 00:15:31.369 { 00:15:31.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.369 "dma_device_type": 2 00:15:31.369 } 00:15:31.369 ], 00:15:31.369 "driver_specific": {} 00:15:31.369 }' 00:15:31.369 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:31.369 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:31.369 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:31.369 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:31.369 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:31.369 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:31.369 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:31.626 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:31.626 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:31.626 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:31.626 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:31.626 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:31.626 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:31.884 [2024-08-14 06:46:58.954368] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.884 [2024-08-14 06:46:58.954419] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.884 [2024-08-14 06:46:58.954541] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.884 [2024-08-14 06:46:58.954615] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.884 [2024-08-14 06:46:58.954630] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:15:31.884 06:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 87420 00:15:31.884 06:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 87420 ']' 00:15:31.884 06:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 87420 00:15:31.884 06:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:15:31.884 06:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:31.884 06:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87420 00:15:31.884 killing process with pid 87420 00:15:31.884 06:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:31.884 06:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:31.884 06:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87420' 00:15:31.884 06:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 87420 00:15:31.884 [2024-08-14 06:46:59.000893] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.884 06:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 87420 00:15:31.884 [2024-08-14 06:46:59.043065] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.143 06:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:32.143 00:15:32.143 real 0m29.072s 00:15:32.143 user 0m54.146s 00:15:32.143 sys 0m4.325s 00:15:32.143 06:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:32.143 06:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.143 ************************************ 00:15:32.143 END TEST raid_state_function_test_sb 00:15:32.143 ************************************ 00:15:32.143 06:46:59 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:15:32.143 06:46:59 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:15:32.143 06:46:59 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:32.143 06:46:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.143 ************************************ 00:15:32.143 START TEST raid_superblock_test 00:15:32.143 ************************************ 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 4 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=88441 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 88441 /var/tmp/spdk-raid.sock 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 88441 ']' 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:32.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:32.143 06:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.402 [2024-08-14 06:46:59.445670] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
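For reference, the trace that follows boils down to the RPC sequence below, issued against the test's dedicated raid socket. This is a condensed sketch rather than an exact replay of every traced step; the binary path, socket, bdev names, sizes, and flags are copied from the surrounding log, and the $rpc shorthand is introduced here only for readability.

    # start the standalone bdev service with raid debug logging on its own RPC socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # four 32 MiB, 512-byte-block malloc bdevs, each wrapped in a passthru bdev with a fixed UUID
    $rpc bdev_malloc_create 32 512 -b malloc1
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    # ... repeated for malloc2/pt2 through malloc4/pt4 ...
    # assemble the concat raid with a 64 KiB strip size and an on-disk superblock (-s)
    $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
    # inspect the assembled volume, as the verify_* helpers below do via jq
    $rpc bdev_raid_get_bdevs all
    $rpc bdev_get_bdevs -b raid_bdev1
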
00:15:32.402 [2024-08-14 06:46:59.445813] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88441 ] 00:15:32.402 [2024-08-14 06:46:59.577137] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.402 [2024-08-14 06:46:59.628733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.660 [2024-08-14 06:46:59.673342] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.660 [2024-08-14 06:46:59.673394] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.227 06:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:33.227 06:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:15:33.227 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:15:33.227 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:33.227 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:15:33.227 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:15:33.228 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:33.228 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:33.228 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:33.228 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:33.228 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:33.485 malloc1 00:15:33.485 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:33.485 [2024-08-14 06:47:00.735290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:33.485 [2024-08-14 06:47:00.735381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.485 [2024-08-14 06:47:00.735409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:33.485 [2024-08-14 06:47:00.735429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.485 [2024-08-14 06:47:00.737798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.485 [2024-08-14 06:47:00.737860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:33.743 pt1 00:15:33.743 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:33.743 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:33.743 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:15:33.743 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:15:33.743 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:33.743 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:33.743 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:33.743 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:33.743 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:33.743 malloc2 00:15:33.743 06:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:34.001 [2024-08-14 06:47:01.179858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:34.001 [2024-08-14 06:47:01.179947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.001 [2024-08-14 06:47:01.179975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:34.001 [2024-08-14 06:47:01.179987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.001 [2024-08-14 06:47:01.182405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.001 [2024-08-14 06:47:01.182448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:34.001 pt2 00:15:34.001 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:34.001 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:34.001 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:15:34.001 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:15:34.001 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:34.001 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:34.001 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:34.001 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:34.001 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:34.259 malloc3 00:15:34.259 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:34.518 [2024-08-14 06:47:01.631604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:34.518 [2024-08-14 06:47:01.631709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.518 [2024-08-14 06:47:01.631738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:34.518 [2024-08-14 06:47:01.631750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.518 [2024-08-14 06:47:01.634135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.518 [2024-08-14 
06:47:01.634193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:34.518 pt3 00:15:34.518 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:34.518 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:34.518 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:15:34.518 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:15:34.518 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:34.518 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:34.518 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:34.518 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:34.518 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:15:34.776 malloc4 00:15:34.776 06:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:35.035 [2024-08-14 06:47:02.088317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:35.035 [2024-08-14 06:47:02.088397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.035 [2024-08-14 06:47:02.088424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:35.035 [2024-08-14 06:47:02.088437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.035 [2024-08-14 06:47:02.090808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.035 [2024-08-14 06:47:02.090860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:35.035 pt4 00:15:35.035 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:35.035 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:35.035 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:15:35.294 [2024-08-14 06:47:02.311959] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:35.294 [2024-08-14 06:47:02.314035] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:35.294 [2024-08-14 06:47:02.314151] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:35.294 [2024-08-14 06:47:02.314235] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:35.294 [2024-08-14 06:47:02.314452] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:35.294 [2024-08-14 06:47:02.314483] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:35.294 [2024-08-14 06:47:02.314841] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:35.294 [2024-08-14 06:47:02.315033] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:35.294 [2024-08-14 06:47:02.315057] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:35.294 [2024-08-14 06:47:02.315258] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.294 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:35.294 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:35.294 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:35.294 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:35.294 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:35.294 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:35.294 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.294 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.294 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.294 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.294 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.294 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.552 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:35.552 "name": "raid_bdev1", 00:15:35.553 "uuid": "d9dc2e27-91e5-4938-9875-a6094f82e71f", 00:15:35.553 "strip_size_kb": 64, 00:15:35.553 "state": "online", 00:15:35.553 "raid_level": "concat", 00:15:35.553 "superblock": true, 00:15:35.553 "num_base_bdevs": 4, 00:15:35.553 "num_base_bdevs_discovered": 4, 00:15:35.553 "num_base_bdevs_operational": 4, 00:15:35.553 "base_bdevs_list": [ 00:15:35.553 { 00:15:35.553 "name": "pt1", 00:15:35.553 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.553 "is_configured": true, 00:15:35.553 "data_offset": 2048, 00:15:35.553 "data_size": 63488 00:15:35.553 }, 00:15:35.553 { 00:15:35.553 "name": "pt2", 00:15:35.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.553 "is_configured": true, 00:15:35.553 "data_offset": 2048, 00:15:35.553 "data_size": 63488 00:15:35.553 }, 00:15:35.553 { 00:15:35.553 "name": "pt3", 00:15:35.553 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.553 "is_configured": true, 00:15:35.553 "data_offset": 2048, 00:15:35.553 "data_size": 63488 00:15:35.553 }, 00:15:35.553 { 00:15:35.553 "name": "pt4", 00:15:35.553 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:35.553 "is_configured": true, 00:15:35.553 "data_offset": 2048, 00:15:35.553 "data_size": 63488 00:15:35.553 } 00:15:35.553 ] 00:15:35.553 }' 00:15:35.553 06:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:35.553 06:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.119 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:15:36.119 06:47:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:36.119 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:36.119 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:36.119 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:36.119 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:36.119 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:36.119 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:36.119 [2024-08-14 06:47:03.346638] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.119 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:36.119 "name": "raid_bdev1", 00:15:36.119 "aliases": [ 00:15:36.119 "d9dc2e27-91e5-4938-9875-a6094f82e71f" 00:15:36.119 ], 00:15:36.119 "product_name": "Raid Volume", 00:15:36.119 "block_size": 512, 00:15:36.119 "num_blocks": 253952, 00:15:36.119 "uuid": "d9dc2e27-91e5-4938-9875-a6094f82e71f", 00:15:36.119 "assigned_rate_limits": { 00:15:36.119 "rw_ios_per_sec": 0, 00:15:36.119 "rw_mbytes_per_sec": 0, 00:15:36.119 "r_mbytes_per_sec": 0, 00:15:36.119 "w_mbytes_per_sec": 0 00:15:36.119 }, 00:15:36.119 "claimed": false, 00:15:36.119 "zoned": false, 00:15:36.119 "supported_io_types": { 00:15:36.119 "read": true, 00:15:36.119 "write": true, 00:15:36.119 "unmap": true, 00:15:36.119 "flush": true, 00:15:36.119 "reset": true, 00:15:36.119 "nvme_admin": false, 00:15:36.119 "nvme_io": false, 00:15:36.119 "nvme_io_md": false, 00:15:36.119 "write_zeroes": true, 00:15:36.119 "zcopy": false, 00:15:36.119 "get_zone_info": false, 00:15:36.119 "zone_management": false, 00:15:36.119 "zone_append": false, 00:15:36.119 "compare": false, 00:15:36.119 "compare_and_write": false, 00:15:36.119 "abort": false, 00:15:36.119 "seek_hole": false, 00:15:36.119 "seek_data": false, 00:15:36.119 "copy": false, 00:15:36.119 "nvme_iov_md": false 00:15:36.119 }, 00:15:36.119 "memory_domains": [ 00:15:36.119 { 00:15:36.119 "dma_device_id": "system", 00:15:36.119 "dma_device_type": 1 00:15:36.119 }, 00:15:36.119 { 00:15:36.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.119 "dma_device_type": 2 00:15:36.119 }, 00:15:36.120 { 00:15:36.120 "dma_device_id": "system", 00:15:36.120 "dma_device_type": 1 00:15:36.120 }, 00:15:36.120 { 00:15:36.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.120 "dma_device_type": 2 00:15:36.120 }, 00:15:36.120 { 00:15:36.120 "dma_device_id": "system", 00:15:36.120 "dma_device_type": 1 00:15:36.120 }, 00:15:36.120 { 00:15:36.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.120 "dma_device_type": 2 00:15:36.120 }, 00:15:36.120 { 00:15:36.120 "dma_device_id": "system", 00:15:36.120 "dma_device_type": 1 00:15:36.120 }, 00:15:36.120 { 00:15:36.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.120 "dma_device_type": 2 00:15:36.120 } 00:15:36.120 ], 00:15:36.120 "driver_specific": { 00:15:36.120 "raid": { 00:15:36.120 "uuid": "d9dc2e27-91e5-4938-9875-a6094f82e71f", 00:15:36.120 "strip_size_kb": 64, 00:15:36.120 "state": "online", 00:15:36.120 "raid_level": "concat", 00:15:36.120 "superblock": true, 00:15:36.120 "num_base_bdevs": 4, 00:15:36.120 "num_base_bdevs_discovered": 4, 00:15:36.120 "num_base_bdevs_operational": 4, 
00:15:36.120 "base_bdevs_list": [ 00:15:36.120 { 00:15:36.120 "name": "pt1", 00:15:36.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.120 "is_configured": true, 00:15:36.120 "data_offset": 2048, 00:15:36.120 "data_size": 63488 00:15:36.120 }, 00:15:36.120 { 00:15:36.120 "name": "pt2", 00:15:36.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.120 "is_configured": true, 00:15:36.120 "data_offset": 2048, 00:15:36.120 "data_size": 63488 00:15:36.120 }, 00:15:36.120 { 00:15:36.120 "name": "pt3", 00:15:36.120 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.120 "is_configured": true, 00:15:36.120 "data_offset": 2048, 00:15:36.120 "data_size": 63488 00:15:36.120 }, 00:15:36.120 { 00:15:36.120 "name": "pt4", 00:15:36.120 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:36.120 "is_configured": true, 00:15:36.120 "data_offset": 2048, 00:15:36.120 "data_size": 63488 00:15:36.120 } 00:15:36.120 ] 00:15:36.120 } 00:15:36.120 } 00:15:36.120 }' 00:15:36.120 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.378 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:36.378 pt2 00:15:36.378 pt3 00:15:36.378 pt4' 00:15:36.378 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:36.378 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:36.378 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:36.637 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:36.637 "name": "pt1", 00:15:36.637 "aliases": [ 00:15:36.637 "00000000-0000-0000-0000-000000000001" 00:15:36.637 ], 00:15:36.637 "product_name": "passthru", 00:15:36.637 "block_size": 512, 00:15:36.637 "num_blocks": 65536, 00:15:36.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.637 "assigned_rate_limits": { 00:15:36.637 "rw_ios_per_sec": 0, 00:15:36.637 "rw_mbytes_per_sec": 0, 00:15:36.637 "r_mbytes_per_sec": 0, 00:15:36.637 "w_mbytes_per_sec": 0 00:15:36.637 }, 00:15:36.637 "claimed": true, 00:15:36.637 "claim_type": "exclusive_write", 00:15:36.637 "zoned": false, 00:15:36.637 "supported_io_types": { 00:15:36.637 "read": true, 00:15:36.637 "write": true, 00:15:36.637 "unmap": true, 00:15:36.637 "flush": true, 00:15:36.637 "reset": true, 00:15:36.637 "nvme_admin": false, 00:15:36.637 "nvme_io": false, 00:15:36.637 "nvme_io_md": false, 00:15:36.637 "write_zeroes": true, 00:15:36.637 "zcopy": true, 00:15:36.637 "get_zone_info": false, 00:15:36.637 "zone_management": false, 00:15:36.637 "zone_append": false, 00:15:36.637 "compare": false, 00:15:36.637 "compare_and_write": false, 00:15:36.637 "abort": true, 00:15:36.637 "seek_hole": false, 00:15:36.637 "seek_data": false, 00:15:36.637 "copy": true, 00:15:36.637 "nvme_iov_md": false 00:15:36.637 }, 00:15:36.637 "memory_domains": [ 00:15:36.637 { 00:15:36.637 "dma_device_id": "system", 00:15:36.637 "dma_device_type": 1 00:15:36.637 }, 00:15:36.637 { 00:15:36.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.637 "dma_device_type": 2 00:15:36.637 } 00:15:36.637 ], 00:15:36.637 "driver_specific": { 00:15:36.637 "passthru": { 00:15:36.637 "name": "pt1", 00:15:36.637 "base_bdev_name": "malloc1" 00:15:36.637 } 00:15:36.637 } 00:15:36.637 }' 00:15:36.637 06:47:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.637 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.637 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:36.637 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.637 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.637 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:36.637 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.637 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.637 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:36.637 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.896 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.896 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:36.896 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:36.896 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:36.896 06:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:36.896 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:36.896 "name": "pt2", 00:15:36.896 "aliases": [ 00:15:36.896 "00000000-0000-0000-0000-000000000002" 00:15:36.896 ], 00:15:36.896 "product_name": "passthru", 00:15:36.896 "block_size": 512, 00:15:36.896 "num_blocks": 65536, 00:15:36.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.896 "assigned_rate_limits": { 00:15:36.896 "rw_ios_per_sec": 0, 00:15:36.896 "rw_mbytes_per_sec": 0, 00:15:36.896 "r_mbytes_per_sec": 0, 00:15:36.896 "w_mbytes_per_sec": 0 00:15:36.896 }, 00:15:36.896 "claimed": true, 00:15:36.896 "claim_type": "exclusive_write", 00:15:36.896 "zoned": false, 00:15:36.896 "supported_io_types": { 00:15:36.896 "read": true, 00:15:36.896 "write": true, 00:15:36.896 "unmap": true, 00:15:36.896 "flush": true, 00:15:36.896 "reset": true, 00:15:36.896 "nvme_admin": false, 00:15:36.896 "nvme_io": false, 00:15:36.896 "nvme_io_md": false, 00:15:36.896 "write_zeroes": true, 00:15:36.896 "zcopy": true, 00:15:36.896 "get_zone_info": false, 00:15:36.896 "zone_management": false, 00:15:36.896 "zone_append": false, 00:15:36.896 "compare": false, 00:15:36.896 "compare_and_write": false, 00:15:36.896 "abort": true, 00:15:36.896 "seek_hole": false, 00:15:36.896 "seek_data": false, 00:15:36.896 "copy": true, 00:15:36.896 "nvme_iov_md": false 00:15:36.896 }, 00:15:36.896 "memory_domains": [ 00:15:36.896 { 00:15:36.896 "dma_device_id": "system", 00:15:36.896 "dma_device_type": 1 00:15:36.896 }, 00:15:36.896 { 00:15:36.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.896 "dma_device_type": 2 00:15:36.896 } 00:15:36.896 ], 00:15:36.896 "driver_specific": { 00:15:36.896 "passthru": { 00:15:36.896 "name": "pt2", 00:15:36.896 "base_bdev_name": "malloc2" 00:15:36.896 } 00:15:36.896 } 00:15:36.896 }' 00:15:36.896 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.155 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # jq .block_size 00:15:37.155 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:37.155 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.155 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.155 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:37.155 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.155 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.413 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:37.413 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:37.413 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:37.413 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:37.413 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:37.413 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:37.413 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:37.673 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:37.673 "name": "pt3", 00:15:37.673 "aliases": [ 00:15:37.673 "00000000-0000-0000-0000-000000000003" 00:15:37.673 ], 00:15:37.673 "product_name": "passthru", 00:15:37.673 "block_size": 512, 00:15:37.673 "num_blocks": 65536, 00:15:37.673 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.673 "assigned_rate_limits": { 00:15:37.673 "rw_ios_per_sec": 0, 00:15:37.673 "rw_mbytes_per_sec": 0, 00:15:37.673 "r_mbytes_per_sec": 0, 00:15:37.673 "w_mbytes_per_sec": 0 00:15:37.673 }, 00:15:37.673 "claimed": true, 00:15:37.673 "claim_type": "exclusive_write", 00:15:37.673 "zoned": false, 00:15:37.673 "supported_io_types": { 00:15:37.673 "read": true, 00:15:37.673 "write": true, 00:15:37.673 "unmap": true, 00:15:37.673 "flush": true, 00:15:37.673 "reset": true, 00:15:37.673 "nvme_admin": false, 00:15:37.673 "nvme_io": false, 00:15:37.673 "nvme_io_md": false, 00:15:37.673 "write_zeroes": true, 00:15:37.673 "zcopy": true, 00:15:37.673 "get_zone_info": false, 00:15:37.673 "zone_management": false, 00:15:37.673 "zone_append": false, 00:15:37.673 "compare": false, 00:15:37.673 "compare_and_write": false, 00:15:37.673 "abort": true, 00:15:37.673 "seek_hole": false, 00:15:37.673 "seek_data": false, 00:15:37.673 "copy": true, 00:15:37.673 "nvme_iov_md": false 00:15:37.673 }, 00:15:37.673 "memory_domains": [ 00:15:37.673 { 00:15:37.673 "dma_device_id": "system", 00:15:37.673 "dma_device_type": 1 00:15:37.673 }, 00:15:37.673 { 00:15:37.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.673 "dma_device_type": 2 00:15:37.673 } 00:15:37.673 ], 00:15:37.673 "driver_specific": { 00:15:37.673 "passthru": { 00:15:37.673 "name": "pt3", 00:15:37.673 "base_bdev_name": "malloc3" 00:15:37.673 } 00:15:37.673 } 00:15:37.673 }' 00:15:37.673 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.673 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.673 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:37.673 06:47:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.673 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.673 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:37.673 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.932 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.932 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:37.932 06:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:37.932 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:37.932 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:37.932 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:37.932 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:37.932 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:38.190 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:38.190 "name": "pt4", 00:15:38.190 "aliases": [ 00:15:38.190 "00000000-0000-0000-0000-000000000004" 00:15:38.190 ], 00:15:38.190 "product_name": "passthru", 00:15:38.190 "block_size": 512, 00:15:38.190 "num_blocks": 65536, 00:15:38.190 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:38.190 "assigned_rate_limits": { 00:15:38.190 "rw_ios_per_sec": 0, 00:15:38.190 "rw_mbytes_per_sec": 0, 00:15:38.190 "r_mbytes_per_sec": 0, 00:15:38.190 "w_mbytes_per_sec": 0 00:15:38.190 }, 00:15:38.190 "claimed": true, 00:15:38.190 "claim_type": "exclusive_write", 00:15:38.190 "zoned": false, 00:15:38.190 "supported_io_types": { 00:15:38.190 "read": true, 00:15:38.190 "write": true, 00:15:38.190 "unmap": true, 00:15:38.190 "flush": true, 00:15:38.190 "reset": true, 00:15:38.190 "nvme_admin": false, 00:15:38.190 "nvme_io": false, 00:15:38.190 "nvme_io_md": false, 00:15:38.190 "write_zeroes": true, 00:15:38.190 "zcopy": true, 00:15:38.190 "get_zone_info": false, 00:15:38.190 "zone_management": false, 00:15:38.190 "zone_append": false, 00:15:38.190 "compare": false, 00:15:38.190 "compare_and_write": false, 00:15:38.190 "abort": true, 00:15:38.190 "seek_hole": false, 00:15:38.190 "seek_data": false, 00:15:38.190 "copy": true, 00:15:38.190 "nvme_iov_md": false 00:15:38.190 }, 00:15:38.190 "memory_domains": [ 00:15:38.190 { 00:15:38.190 "dma_device_id": "system", 00:15:38.190 "dma_device_type": 1 00:15:38.190 }, 00:15:38.190 { 00:15:38.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.190 "dma_device_type": 2 00:15:38.190 } 00:15:38.190 ], 00:15:38.190 "driver_specific": { 00:15:38.190 "passthru": { 00:15:38.190 "name": "pt4", 00:15:38.190 "base_bdev_name": "malloc4" 00:15:38.190 } 00:15:38.190 } 00:15:38.190 }' 00:15:38.190 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.190 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.190 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:38.190 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.190 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# jq .md_size 00:15:38.190 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:38.190 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.448 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.449 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:38.449 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.449 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.449 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:38.449 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:38.449 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:15:38.707 [2024-08-14 06:47:05.798621] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:38.707 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=d9dc2e27-91e5-4938-9875-a6094f82e71f 00:15:38.707 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z d9dc2e27-91e5-4938-9875-a6094f82e71f ']' 00:15:38.707 06:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:38.965 [2024-08-14 06:47:06.021862] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:38.965 [2024-08-14 06:47:06.021908] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.965 [2024-08-14 06:47:06.022020] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.965 [2024-08-14 06:47:06.022107] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.965 [2024-08-14 06:47:06.022134] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:38.965 06:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.965 06:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:15:39.225 06:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:15:39.225 06:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:15:39.225 06:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:39.225 06:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:39.483 06:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:39.483 06:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:39.483 06:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:39.483 06:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:39.742 06:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:39.742 06:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:15:40.000 06:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:40.000 06:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:40.258 06:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:15:40.258 06:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:40.258 06:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:15:40.258 06:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:40.258 06:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.258 06:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:15:40.258 06:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.258 06:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:15:40.258 06:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.258 06:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:15:40.258 06:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.258 06:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:40.258 06:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:40.518 [2024-08-14 06:47:07.571489] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:40.518 [2024-08-14 06:47:07.573654] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:40.518 [2024-08-14 06:47:07.573721] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:40.518 [2024-08-14 06:47:07.573763] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:40.518 [2024-08-14 06:47:07.573824] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:40.518 [2024-08-14 06:47:07.574452] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:40.518 [2024-08-14 06:47:07.574565] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:40.518 [2024-08-14 06:47:07.574666] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:40.518 [2024-08-14 06:47:07.574743] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:40.518 [2024-08-14 06:47:07.574761] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:40.518 request: 00:15:40.518 { 00:15:40.518 "name": "raid_bdev1", 00:15:40.518 "raid_level": "concat", 00:15:40.518 "base_bdevs": [ 00:15:40.518 "malloc1", 00:15:40.518 "malloc2", 00:15:40.518 "malloc3", 00:15:40.518 "malloc4" 00:15:40.518 ], 00:15:40.518 "strip_size_kb": 64, 00:15:40.518 "superblock": false, 00:15:40.518 "method": "bdev_raid_create", 00:15:40.518 "req_id": 1 00:15:40.518 } 00:15:40.518 Got JSON-RPC error response 00:15:40.518 response: 00:15:40.518 { 00:15:40.518 "code": -17, 00:15:40.518 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:40.518 } 00:15:40.518 06:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:15:40.518 06:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:15:40.518 06:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:15:40.518 06:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:15:40.518 06:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.518 06:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:15:40.778 06:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:15:40.778 06:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:15:40.778 06:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:40.778 [2024-08-14 06:47:08.002984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:40.778 [2024-08-14 06:47:08.003279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.778 [2024-08-14 06:47:08.003372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:40.778 [2024-08-14 06:47:08.003447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.778 [2024-08-14 06:47:08.006057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.778 [2024-08-14 06:47:08.006239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:40.778 [2024-08-14 06:47:08.006414] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:40.778 [2024-08-14 06:47:08.006478] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:40.778 pt1 00:15:40.778 06:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:40.778 06:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:40.778 06:47:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:40.778 06:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:40.778 06:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:40.778 06:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:40.778 06:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:40.778 06:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:40.778 06:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:40.778 06:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:40.778 06:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.778 06:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.039 06:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:41.039 "name": "raid_bdev1", 00:15:41.039 "uuid": "d9dc2e27-91e5-4938-9875-a6094f82e71f", 00:15:41.039 "strip_size_kb": 64, 00:15:41.039 "state": "configuring", 00:15:41.039 "raid_level": "concat", 00:15:41.039 "superblock": true, 00:15:41.039 "num_base_bdevs": 4, 00:15:41.039 "num_base_bdevs_discovered": 1, 00:15:41.039 "num_base_bdevs_operational": 4, 00:15:41.039 "base_bdevs_list": [ 00:15:41.039 { 00:15:41.039 "name": "pt1", 00:15:41.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:41.039 "is_configured": true, 00:15:41.039 "data_offset": 2048, 00:15:41.039 "data_size": 63488 00:15:41.039 }, 00:15:41.039 { 00:15:41.039 "name": null, 00:15:41.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.039 "is_configured": false, 00:15:41.039 "data_offset": 2048, 00:15:41.039 "data_size": 63488 00:15:41.039 }, 00:15:41.039 { 00:15:41.039 "name": null, 00:15:41.039 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:41.039 "is_configured": false, 00:15:41.039 "data_offset": 2048, 00:15:41.039 "data_size": 63488 00:15:41.039 }, 00:15:41.039 { 00:15:41.039 "name": null, 00:15:41.039 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:41.039 "is_configured": false, 00:15:41.039 "data_offset": 2048, 00:15:41.039 "data_size": 63488 00:15:41.039 } 00:15:41.039 ] 00:15:41.039 }' 00:15:41.039 06:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:41.039 06:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.608 06:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:15:41.608 06:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:41.867 [2024-08-14 06:47:09.009303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:41.867 [2024-08-14 06:47:09.009800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.867 [2024-08-14 06:47:09.009851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:41.867 [2024-08-14 06:47:09.009869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:15:41.867 [2024-08-14 06:47:09.010377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.867 [2024-08-14 06:47:09.010415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:41.867 [2024-08-14 06:47:09.010515] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:41.867 [2024-08-14 06:47:09.010553] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:41.867 pt2 00:15:41.867 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:42.126 [2024-08-14 06:47:09.225012] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:42.126 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:42.126 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:42.126 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:42.126 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:42.126 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:42.126 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:42.126 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:42.126 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:42.126 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:42.126 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:42.126 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.126 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.386 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:42.386 "name": "raid_bdev1", 00:15:42.386 "uuid": "d9dc2e27-91e5-4938-9875-a6094f82e71f", 00:15:42.386 "strip_size_kb": 64, 00:15:42.386 "state": "configuring", 00:15:42.386 "raid_level": "concat", 00:15:42.386 "superblock": true, 00:15:42.386 "num_base_bdevs": 4, 00:15:42.386 "num_base_bdevs_discovered": 1, 00:15:42.386 "num_base_bdevs_operational": 4, 00:15:42.386 "base_bdevs_list": [ 00:15:42.386 { 00:15:42.386 "name": "pt1", 00:15:42.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:42.386 "is_configured": true, 00:15:42.386 "data_offset": 2048, 00:15:42.386 "data_size": 63488 00:15:42.386 }, 00:15:42.386 { 00:15:42.386 "name": null, 00:15:42.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:42.386 "is_configured": false, 00:15:42.386 "data_offset": 2048, 00:15:42.386 "data_size": 63488 00:15:42.386 }, 00:15:42.386 { 00:15:42.386 "name": null, 00:15:42.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:42.386 "is_configured": false, 00:15:42.386 "data_offset": 2048, 00:15:42.386 "data_size": 63488 00:15:42.386 }, 00:15:42.386 { 00:15:42.386 "name": null, 00:15:42.386 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:42.386 "is_configured": false, 00:15:42.386 "data_offset": 2048, 
00:15:42.386 "data_size": 63488 00:15:42.386 } 00:15:42.386 ] 00:15:42.386 }' 00:15:42.386 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:42.386 06:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.953 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:15:42.953 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:42.954 06:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:42.954 [2024-08-14 06:47:10.199374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:42.954 [2024-08-14 06:47:10.199456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.954 [2024-08-14 06:47:10.199481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:42.954 [2024-08-14 06:47:10.199493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.954 [2024-08-14 06:47:10.199965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.954 [2024-08-14 06:47:10.199985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:42.954 [2024-08-14 06:47:10.200077] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:42.954 [2024-08-14 06:47:10.200100] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:42.954 pt2 00:15:43.212 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:15:43.212 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:43.212 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:43.212 [2024-08-14 06:47:10.411030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:43.212 [2024-08-14 06:47:10.411134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.212 [2024-08-14 06:47:10.411163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:43.212 [2024-08-14 06:47:10.411196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.212 [2024-08-14 06:47:10.411636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.212 [2024-08-14 06:47:10.411655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:43.212 [2024-08-14 06:47:10.411750] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:43.212 [2024-08-14 06:47:10.411775] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:43.212 pt3 00:15:43.212 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:15:43.212 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:43.212 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 
00:15:43.472 [2024-08-14 06:47:10.618735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:43.472 [2024-08-14 06:47:10.618820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.472 [2024-08-14 06:47:10.618849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:43.472 [2024-08-14 06:47:10.618860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.472 [2024-08-14 06:47:10.619318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.472 [2024-08-14 06:47:10.619340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:43.472 [2024-08-14 06:47:10.619433] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:43.472 [2024-08-14 06:47:10.619459] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:43.472 [2024-08-14 06:47:10.619587] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:43.472 [2024-08-14 06:47:10.619596] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:43.472 [2024-08-14 06:47:10.619843] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:43.472 [2024-08-14 06:47:10.619969] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:43.472 [2024-08-14 06:47:10.619989] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:43.472 [2024-08-14 06:47:10.620094] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.472 pt4 00:15:43.472 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:15:43.472 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:43.472 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:43.472 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:43.472 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:43.472 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:43.472 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:43.472 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:43.472 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.472 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.472 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.472 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.472 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.472 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.731 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.731 "name": "raid_bdev1", 00:15:43.731 "uuid": 
"d9dc2e27-91e5-4938-9875-a6094f82e71f", 00:15:43.731 "strip_size_kb": 64, 00:15:43.731 "state": "online", 00:15:43.731 "raid_level": "concat", 00:15:43.731 "superblock": true, 00:15:43.731 "num_base_bdevs": 4, 00:15:43.731 "num_base_bdevs_discovered": 4, 00:15:43.731 "num_base_bdevs_operational": 4, 00:15:43.731 "base_bdevs_list": [ 00:15:43.731 { 00:15:43.731 "name": "pt1", 00:15:43.731 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:43.731 "is_configured": true, 00:15:43.731 "data_offset": 2048, 00:15:43.731 "data_size": 63488 00:15:43.731 }, 00:15:43.731 { 00:15:43.731 "name": "pt2", 00:15:43.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:43.731 "is_configured": true, 00:15:43.731 "data_offset": 2048, 00:15:43.731 "data_size": 63488 00:15:43.731 }, 00:15:43.731 { 00:15:43.731 "name": "pt3", 00:15:43.731 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:43.731 "is_configured": true, 00:15:43.731 "data_offset": 2048, 00:15:43.731 "data_size": 63488 00:15:43.731 }, 00:15:43.731 { 00:15:43.731 "name": "pt4", 00:15:43.731 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:43.731 "is_configured": true, 00:15:43.731 "data_offset": 2048, 00:15:43.731 "data_size": 63488 00:15:43.731 } 00:15:43.731 ] 00:15:43.731 }' 00:15:43.731 06:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.731 06:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.301 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:15:44.301 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:44.301 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:44.301 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:44.301 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:44.301 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:44.301 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:44.301 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:44.561 [2024-08-14 06:47:11.573596] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.561 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:44.561 "name": "raid_bdev1", 00:15:44.561 "aliases": [ 00:15:44.561 "d9dc2e27-91e5-4938-9875-a6094f82e71f" 00:15:44.561 ], 00:15:44.561 "product_name": "Raid Volume", 00:15:44.561 "block_size": 512, 00:15:44.561 "num_blocks": 253952, 00:15:44.561 "uuid": "d9dc2e27-91e5-4938-9875-a6094f82e71f", 00:15:44.561 "assigned_rate_limits": { 00:15:44.561 "rw_ios_per_sec": 0, 00:15:44.561 "rw_mbytes_per_sec": 0, 00:15:44.561 "r_mbytes_per_sec": 0, 00:15:44.561 "w_mbytes_per_sec": 0 00:15:44.561 }, 00:15:44.561 "claimed": false, 00:15:44.561 "zoned": false, 00:15:44.561 "supported_io_types": { 00:15:44.561 "read": true, 00:15:44.561 "write": true, 00:15:44.561 "unmap": true, 00:15:44.561 "flush": true, 00:15:44.561 "reset": true, 00:15:44.561 "nvme_admin": false, 00:15:44.561 "nvme_io": false, 00:15:44.561 "nvme_io_md": false, 00:15:44.561 "write_zeroes": true, 00:15:44.561 "zcopy": false, 00:15:44.561 "get_zone_info": false, 00:15:44.561 "zone_management": 
false, 00:15:44.561 "zone_append": false, 00:15:44.561 "compare": false, 00:15:44.561 "compare_and_write": false, 00:15:44.561 "abort": false, 00:15:44.561 "seek_hole": false, 00:15:44.561 "seek_data": false, 00:15:44.561 "copy": false, 00:15:44.561 "nvme_iov_md": false 00:15:44.561 }, 00:15:44.561 "memory_domains": [ 00:15:44.561 { 00:15:44.562 "dma_device_id": "system", 00:15:44.562 "dma_device_type": 1 00:15:44.562 }, 00:15:44.562 { 00:15:44.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.562 "dma_device_type": 2 00:15:44.562 }, 00:15:44.562 { 00:15:44.562 "dma_device_id": "system", 00:15:44.562 "dma_device_type": 1 00:15:44.562 }, 00:15:44.562 { 00:15:44.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.562 "dma_device_type": 2 00:15:44.562 }, 00:15:44.562 { 00:15:44.562 "dma_device_id": "system", 00:15:44.562 "dma_device_type": 1 00:15:44.562 }, 00:15:44.562 { 00:15:44.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.562 "dma_device_type": 2 00:15:44.562 }, 00:15:44.562 { 00:15:44.562 "dma_device_id": "system", 00:15:44.562 "dma_device_type": 1 00:15:44.562 }, 00:15:44.562 { 00:15:44.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.562 "dma_device_type": 2 00:15:44.562 } 00:15:44.562 ], 00:15:44.562 "driver_specific": { 00:15:44.562 "raid": { 00:15:44.562 "uuid": "d9dc2e27-91e5-4938-9875-a6094f82e71f", 00:15:44.562 "strip_size_kb": 64, 00:15:44.562 "state": "online", 00:15:44.562 "raid_level": "concat", 00:15:44.562 "superblock": true, 00:15:44.562 "num_base_bdevs": 4, 00:15:44.562 "num_base_bdevs_discovered": 4, 00:15:44.562 "num_base_bdevs_operational": 4, 00:15:44.562 "base_bdevs_list": [ 00:15:44.562 { 00:15:44.562 "name": "pt1", 00:15:44.562 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:44.562 "is_configured": true, 00:15:44.562 "data_offset": 2048, 00:15:44.562 "data_size": 63488 00:15:44.562 }, 00:15:44.562 { 00:15:44.562 "name": "pt2", 00:15:44.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:44.562 "is_configured": true, 00:15:44.562 "data_offset": 2048, 00:15:44.562 "data_size": 63488 00:15:44.562 }, 00:15:44.562 { 00:15:44.562 "name": "pt3", 00:15:44.562 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:44.562 "is_configured": true, 00:15:44.562 "data_offset": 2048, 00:15:44.562 "data_size": 63488 00:15:44.562 }, 00:15:44.562 { 00:15:44.562 "name": "pt4", 00:15:44.562 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:44.562 "is_configured": true, 00:15:44.562 "data_offset": 2048, 00:15:44.562 "data_size": 63488 00:15:44.562 } 00:15:44.562 ] 00:15:44.562 } 00:15:44.562 } 00:15:44.562 }' 00:15:44.562 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:44.562 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:44.562 pt2 00:15:44.562 pt3 00:15:44.562 pt4' 00:15:44.562 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:44.562 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:44.562 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:44.822 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:44.822 "name": "pt1", 00:15:44.822 "aliases": [ 00:15:44.822 "00000000-0000-0000-0000-000000000001" 00:15:44.822 ], 00:15:44.822 "product_name": 
"passthru", 00:15:44.822 "block_size": 512, 00:15:44.822 "num_blocks": 65536, 00:15:44.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:44.822 "assigned_rate_limits": { 00:15:44.822 "rw_ios_per_sec": 0, 00:15:44.822 "rw_mbytes_per_sec": 0, 00:15:44.822 "r_mbytes_per_sec": 0, 00:15:44.822 "w_mbytes_per_sec": 0 00:15:44.822 }, 00:15:44.822 "claimed": true, 00:15:44.822 "claim_type": "exclusive_write", 00:15:44.822 "zoned": false, 00:15:44.822 "supported_io_types": { 00:15:44.822 "read": true, 00:15:44.822 "write": true, 00:15:44.822 "unmap": true, 00:15:44.822 "flush": true, 00:15:44.822 "reset": true, 00:15:44.822 "nvme_admin": false, 00:15:44.822 "nvme_io": false, 00:15:44.822 "nvme_io_md": false, 00:15:44.822 "write_zeroes": true, 00:15:44.822 "zcopy": true, 00:15:44.822 "get_zone_info": false, 00:15:44.822 "zone_management": false, 00:15:44.822 "zone_append": false, 00:15:44.822 "compare": false, 00:15:44.822 "compare_and_write": false, 00:15:44.822 "abort": true, 00:15:44.822 "seek_hole": false, 00:15:44.822 "seek_data": false, 00:15:44.822 "copy": true, 00:15:44.822 "nvme_iov_md": false 00:15:44.822 }, 00:15:44.822 "memory_domains": [ 00:15:44.822 { 00:15:44.822 "dma_device_id": "system", 00:15:44.822 "dma_device_type": 1 00:15:44.822 }, 00:15:44.822 { 00:15:44.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.822 "dma_device_type": 2 00:15:44.822 } 00:15:44.822 ], 00:15:44.822 "driver_specific": { 00:15:44.822 "passthru": { 00:15:44.822 "name": "pt1", 00:15:44.822 "base_bdev_name": "malloc1" 00:15:44.822 } 00:15:44.822 } 00:15:44.822 }' 00:15:44.822 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:44.822 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:44.822 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:44.822 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:44.822 06:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:44.822 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:44.822 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:44.822 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:45.082 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:45.082 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:45.082 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:45.082 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:45.082 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:45.082 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:45.082 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:45.342 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:45.342 "name": "pt2", 00:15:45.342 "aliases": [ 00:15:45.342 "00000000-0000-0000-0000-000000000002" 00:15:45.342 ], 00:15:45.342 "product_name": "passthru", 00:15:45.342 "block_size": 512, 00:15:45.342 "num_blocks": 65536, 00:15:45.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.342 
"assigned_rate_limits": { 00:15:45.342 "rw_ios_per_sec": 0, 00:15:45.342 "rw_mbytes_per_sec": 0, 00:15:45.342 "r_mbytes_per_sec": 0, 00:15:45.342 "w_mbytes_per_sec": 0 00:15:45.342 }, 00:15:45.342 "claimed": true, 00:15:45.342 "claim_type": "exclusive_write", 00:15:45.342 "zoned": false, 00:15:45.342 "supported_io_types": { 00:15:45.342 "read": true, 00:15:45.342 "write": true, 00:15:45.342 "unmap": true, 00:15:45.342 "flush": true, 00:15:45.342 "reset": true, 00:15:45.342 "nvme_admin": false, 00:15:45.342 "nvme_io": false, 00:15:45.342 "nvme_io_md": false, 00:15:45.342 "write_zeroes": true, 00:15:45.342 "zcopy": true, 00:15:45.342 "get_zone_info": false, 00:15:45.342 "zone_management": false, 00:15:45.342 "zone_append": false, 00:15:45.342 "compare": false, 00:15:45.342 "compare_and_write": false, 00:15:45.342 "abort": true, 00:15:45.342 "seek_hole": false, 00:15:45.342 "seek_data": false, 00:15:45.342 "copy": true, 00:15:45.342 "nvme_iov_md": false 00:15:45.342 }, 00:15:45.342 "memory_domains": [ 00:15:45.342 { 00:15:45.342 "dma_device_id": "system", 00:15:45.342 "dma_device_type": 1 00:15:45.342 }, 00:15:45.342 { 00:15:45.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.342 "dma_device_type": 2 00:15:45.342 } 00:15:45.342 ], 00:15:45.342 "driver_specific": { 00:15:45.342 "passthru": { 00:15:45.342 "name": "pt2", 00:15:45.342 "base_bdev_name": "malloc2" 00:15:45.342 } 00:15:45.342 } 00:15:45.342 }' 00:15:45.342 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:45.342 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:45.342 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:45.342 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:45.342 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:45.342 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:45.342 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:45.602 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:45.602 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:45.602 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:45.602 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:45.602 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:45.602 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:45.602 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:45.602 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:45.862 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:45.862 "name": "pt3", 00:15:45.862 "aliases": [ 00:15:45.862 "00000000-0000-0000-0000-000000000003" 00:15:45.862 ], 00:15:45.862 "product_name": "passthru", 00:15:45.862 "block_size": 512, 00:15:45.862 "num_blocks": 65536, 00:15:45.862 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.862 "assigned_rate_limits": { 00:15:45.862 "rw_ios_per_sec": 0, 00:15:45.863 "rw_mbytes_per_sec": 0, 00:15:45.863 "r_mbytes_per_sec": 0, 00:15:45.863 
"w_mbytes_per_sec": 0 00:15:45.863 }, 00:15:45.863 "claimed": true, 00:15:45.863 "claim_type": "exclusive_write", 00:15:45.863 "zoned": false, 00:15:45.863 "supported_io_types": { 00:15:45.863 "read": true, 00:15:45.863 "write": true, 00:15:45.863 "unmap": true, 00:15:45.863 "flush": true, 00:15:45.863 "reset": true, 00:15:45.863 "nvme_admin": false, 00:15:45.863 "nvme_io": false, 00:15:45.863 "nvme_io_md": false, 00:15:45.863 "write_zeroes": true, 00:15:45.863 "zcopy": true, 00:15:45.863 "get_zone_info": false, 00:15:45.863 "zone_management": false, 00:15:45.863 "zone_append": false, 00:15:45.863 "compare": false, 00:15:45.863 "compare_and_write": false, 00:15:45.863 "abort": true, 00:15:45.863 "seek_hole": false, 00:15:45.863 "seek_data": false, 00:15:45.863 "copy": true, 00:15:45.863 "nvme_iov_md": false 00:15:45.863 }, 00:15:45.863 "memory_domains": [ 00:15:45.863 { 00:15:45.863 "dma_device_id": "system", 00:15:45.863 "dma_device_type": 1 00:15:45.863 }, 00:15:45.863 { 00:15:45.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.863 "dma_device_type": 2 00:15:45.863 } 00:15:45.863 ], 00:15:45.863 "driver_specific": { 00:15:45.863 "passthru": { 00:15:45.863 "name": "pt3", 00:15:45.863 "base_bdev_name": "malloc3" 00:15:45.863 } 00:15:45.863 } 00:15:45.863 }' 00:15:45.863 06:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:45.863 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:45.863 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:45.863 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:45.863 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:45.863 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:45.863 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.123 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.123 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:46.123 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.123 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.123 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:46.123 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:46.123 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:46.123 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:46.382 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:46.382 "name": "pt4", 00:15:46.382 "aliases": [ 00:15:46.382 "00000000-0000-0000-0000-000000000004" 00:15:46.382 ], 00:15:46.382 "product_name": "passthru", 00:15:46.382 "block_size": 512, 00:15:46.382 "num_blocks": 65536, 00:15:46.382 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:46.382 "assigned_rate_limits": { 00:15:46.382 "rw_ios_per_sec": 0, 00:15:46.382 "rw_mbytes_per_sec": 0, 00:15:46.382 "r_mbytes_per_sec": 0, 00:15:46.382 "w_mbytes_per_sec": 0 00:15:46.382 }, 00:15:46.382 "claimed": true, 00:15:46.382 "claim_type": "exclusive_write", 00:15:46.382 "zoned": false, 
00:15:46.382 "supported_io_types": { 00:15:46.382 "read": true, 00:15:46.382 "write": true, 00:15:46.382 "unmap": true, 00:15:46.382 "flush": true, 00:15:46.382 "reset": true, 00:15:46.382 "nvme_admin": false, 00:15:46.382 "nvme_io": false, 00:15:46.382 "nvme_io_md": false, 00:15:46.382 "write_zeroes": true, 00:15:46.382 "zcopy": true, 00:15:46.382 "get_zone_info": false, 00:15:46.382 "zone_management": false, 00:15:46.382 "zone_append": false, 00:15:46.382 "compare": false, 00:15:46.382 "compare_and_write": false, 00:15:46.382 "abort": true, 00:15:46.382 "seek_hole": false, 00:15:46.382 "seek_data": false, 00:15:46.382 "copy": true, 00:15:46.382 "nvme_iov_md": false 00:15:46.382 }, 00:15:46.382 "memory_domains": [ 00:15:46.382 { 00:15:46.382 "dma_device_id": "system", 00:15:46.382 "dma_device_type": 1 00:15:46.382 }, 00:15:46.382 { 00:15:46.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.382 "dma_device_type": 2 00:15:46.382 } 00:15:46.382 ], 00:15:46.382 "driver_specific": { 00:15:46.382 "passthru": { 00:15:46.382 "name": "pt4", 00:15:46.382 "base_bdev_name": "malloc4" 00:15:46.382 } 00:15:46.382 } 00:15:46.382 }' 00:15:46.382 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:46.382 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:46.382 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:46.382 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.642 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.642 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:46.642 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.642 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.642 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:46.642 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.643 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.643 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:46.643 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:46.643 06:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:15:46.904 [2024-08-14 06:47:14.081406] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.904 06:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' d9dc2e27-91e5-4938-9875-a6094f82e71f '!=' d9dc2e27-91e5-4938-9875-a6094f82e71f ']' 00:15:46.904 06:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:15:46.904 06:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:46.904 06:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:46.904 06:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 88441 00:15:46.904 06:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 88441 ']' 00:15:46.904 06:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 88441 00:15:46.904 06:47:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:15:46.904 06:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:46.904 06:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88441 00:15:46.904 06:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:46.904 06:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:46.904 killing process with pid 88441 00:15:46.904 06:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88441' 00:15:46.904 06:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 88441 00:15:46.904 [2024-08-14 06:47:14.141716] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.904 [2024-08-14 06:47:14.141842] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.904 06:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 88441 00:15:46.904 [2024-08-14 06:47:14.141938] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.904 [2024-08-14 06:47:14.141951] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:47.163 [2024-08-14 06:47:14.188655] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.429 06:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:15:47.429 00:15:47.429 real 0m15.073s 00:15:47.429 user 0m27.390s 00:15:47.429 sys 0m2.332s 00:15:47.429 06:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:47.429 06:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.429 ************************************ 00:15:47.429 END TEST raid_superblock_test 00:15:47.429 ************************************ 00:15:47.429 06:47:14 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:15:47.429 06:47:14 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:47.429 06:47:14 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:47.429 06:47:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:47.429 ************************************ 00:15:47.429 START TEST raid_read_error_test 00:15:47.429 ************************************ 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 4 read 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 
00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:15:47.429 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:15:47.430 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:15:47.430 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:15:47.430 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:15:47.430 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:15:47.430 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.IaB6bOjZ55 00:15:47.430 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=88942 00:15:47.430 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:47.430 06:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 88942 /var/tmp/spdk-raid.sock 00:15:47.430 06:47:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 88942 ']' 00:15:47.430 06:47:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:47.430 06:47:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:47.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:47.430 06:47:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:15:47.430 06:47:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:47.430 06:47:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.430 [2024-08-14 06:47:14.618089] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:15:47.430 [2024-08-14 06:47:14.618226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88942 ] 00:15:47.699 [2024-08-14 06:47:14.768381] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.699 [2024-08-14 06:47:14.821028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.699 [2024-08-14 06:47:14.866051] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.699 [2024-08-14 06:47:14.866107] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.267 06:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:48.267 06:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:15:48.267 06:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:48.267 06:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:48.525 BaseBdev1_malloc 00:15:48.525 06:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:48.783 true 00:15:48.783 06:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:49.041 [2024-08-14 06:47:16.071879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:49.041 [2024-08-14 06:47:16.071966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.041 [2024-08-14 06:47:16.072000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:49.041 [2024-08-14 06:47:16.072017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.041 [2024-08-14 06:47:16.074481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.041 [2024-08-14 06:47:16.074534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:49.041 BaseBdev1 00:15:49.041 06:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:49.041 06:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:49.300 BaseBdev2_malloc 00:15:49.300 06:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:49.300 true 00:15:49.300 06:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc 
-p BaseBdev2 00:15:49.558 [2024-08-14 06:47:16.700089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:49.558 [2024-08-14 06:47:16.700195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.558 [2024-08-14 06:47:16.700223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:49.558 [2024-08-14 06:47:16.700237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.558 [2024-08-14 06:47:16.702638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.558 [2024-08-14 06:47:16.702684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:49.558 BaseBdev2 00:15:49.558 06:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:49.558 06:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:49.816 BaseBdev3_malloc 00:15:49.816 06:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:50.076 true 00:15:50.076 06:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:50.076 [2024-08-14 06:47:17.307616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:50.076 [2024-08-14 06:47:17.307710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.076 [2024-08-14 06:47:17.307738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:50.076 [2024-08-14 06:47:17.307752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.076 [2024-08-14 06:47:17.310030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.076 [2024-08-14 06:47:17.310079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:50.076 BaseBdev3 00:15:50.076 06:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:50.076 06:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:50.336 BaseBdev4_malloc 00:15:50.336 06:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:50.594 true 00:15:50.594 06:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:50.853 [2024-08-14 06:47:17.903908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:50.853 [2024-08-14 06:47:17.903990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.853 [2024-08-14 06:47:17.904017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:50.853 [2024-08-14 06:47:17.904034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:15:50.853 [2024-08-14 06:47:17.906496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.853 [2024-08-14 06:47:17.906545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:50.853 BaseBdev4 00:15:50.853 06:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:50.853 [2024-08-14 06:47:18.095668] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.853 [2024-08-14 06:47:18.097816] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.853 [2024-08-14 06:47:18.097918] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:50.853 [2024-08-14 06:47:18.097997] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:50.853 [2024-08-14 06:47:18.098273] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:15:50.853 [2024-08-14 06:47:18.098304] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:50.853 [2024-08-14 06:47:18.098647] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:50.853 [2024-08-14 06:47:18.098835] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:15:50.853 [2024-08-14 06:47:18.098862] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:15:50.853 [2024-08-14 06:47:18.099056] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.113 06:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:51.113 06:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:51.113 06:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:51.113 06:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:51.113 06:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:51.113 06:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:51.113 06:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.113 06:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.113 06:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.113 06:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:51.113 06:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.113 06:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.113 06:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:51.113 "name": "raid_bdev1", 00:15:51.113 "uuid": "7b6bc896-fe50-491b-9ddf-61ca1a6540dc", 00:15:51.113 "strip_size_kb": 64, 00:15:51.113 "state": "online", 00:15:51.113 "raid_level": "concat", 00:15:51.113 "superblock": true, 
00:15:51.113 "num_base_bdevs": 4, 00:15:51.113 "num_base_bdevs_discovered": 4, 00:15:51.113 "num_base_bdevs_operational": 4, 00:15:51.113 "base_bdevs_list": [ 00:15:51.113 { 00:15:51.113 "name": "BaseBdev1", 00:15:51.113 "uuid": "07d0c57e-2a79-509b-aa28-6ad7e4a92200", 00:15:51.113 "is_configured": true, 00:15:51.113 "data_offset": 2048, 00:15:51.113 "data_size": 63488 00:15:51.113 }, 00:15:51.113 { 00:15:51.113 "name": "BaseBdev2", 00:15:51.113 "uuid": "9878b0e6-7197-53c0-85e9-8b4f50a4284c", 00:15:51.113 "is_configured": true, 00:15:51.113 "data_offset": 2048, 00:15:51.113 "data_size": 63488 00:15:51.113 }, 00:15:51.113 { 00:15:51.113 "name": "BaseBdev3", 00:15:51.113 "uuid": "8f94b97e-efcf-56f8-9a06-133e5efad427", 00:15:51.113 "is_configured": true, 00:15:51.113 "data_offset": 2048, 00:15:51.113 "data_size": 63488 00:15:51.113 }, 00:15:51.113 { 00:15:51.113 "name": "BaseBdev4", 00:15:51.113 "uuid": "0b7a6873-60dd-5aec-85c2-8012f9af9db8", 00:15:51.113 "is_configured": true, 00:15:51.113 "data_offset": 2048, 00:15:51.113 "data_size": 63488 00:15:51.113 } 00:15:51.113 ] 00:15:51.113 }' 00:15:51.113 06:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:51.113 06:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.682 06:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:15:51.682 06:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:51.942 [2024-08-14 06:47:18.966622] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:15:52.880 06:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:52.880 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:52.880 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:15:52.880 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:15:52.880 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:52.880 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:52.880 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:52.880 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:52.880 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:52.880 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:52.880 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:52.880 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:52.880 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:52.880 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:52.880 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.880 06:47:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.139 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:53.139 "name": "raid_bdev1", 00:15:53.139 "uuid": "7b6bc896-fe50-491b-9ddf-61ca1a6540dc", 00:15:53.139 "strip_size_kb": 64, 00:15:53.139 "state": "online", 00:15:53.139 "raid_level": "concat", 00:15:53.139 "superblock": true, 00:15:53.139 "num_base_bdevs": 4, 00:15:53.139 "num_base_bdevs_discovered": 4, 00:15:53.139 "num_base_bdevs_operational": 4, 00:15:53.139 "base_bdevs_list": [ 00:15:53.139 { 00:15:53.139 "name": "BaseBdev1", 00:15:53.139 "uuid": "07d0c57e-2a79-509b-aa28-6ad7e4a92200", 00:15:53.140 "is_configured": true, 00:15:53.140 "data_offset": 2048, 00:15:53.140 "data_size": 63488 00:15:53.140 }, 00:15:53.140 { 00:15:53.140 "name": "BaseBdev2", 00:15:53.140 "uuid": "9878b0e6-7197-53c0-85e9-8b4f50a4284c", 00:15:53.140 "is_configured": true, 00:15:53.140 "data_offset": 2048, 00:15:53.140 "data_size": 63488 00:15:53.140 }, 00:15:53.140 { 00:15:53.140 "name": "BaseBdev3", 00:15:53.140 "uuid": "8f94b97e-efcf-56f8-9a06-133e5efad427", 00:15:53.140 "is_configured": true, 00:15:53.140 "data_offset": 2048, 00:15:53.140 "data_size": 63488 00:15:53.140 }, 00:15:53.140 { 00:15:53.140 "name": "BaseBdev4", 00:15:53.140 "uuid": "0b7a6873-60dd-5aec-85c2-8012f9af9db8", 00:15:53.140 "is_configured": true, 00:15:53.140 "data_offset": 2048, 00:15:53.140 "data_size": 63488 00:15:53.140 } 00:15:53.140 ] 00:15:53.140 }' 00:15:53.140 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:53.140 06:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.709 06:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:53.969 [2024-08-14 06:47:21.099756] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:53.969 [2024-08-14 06:47:21.099812] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.969 [2024-08-14 06:47:21.102299] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.969 [2024-08-14 06:47:21.102368] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.969 [2024-08-14 06:47:21.102417] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.969 [2024-08-14 06:47:21.102430] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:15:53.969 0 00:15:53.969 06:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 88942 00:15:53.969 06:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 88942 ']' 00:15:53.969 06:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 88942 00:15:53.969 06:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:15:53.969 06:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:53.969 06:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88942 00:15:53.969 killing process with pid 88942 00:15:53.969 06:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:53.969 06:47:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:53.969 06:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88942' 00:15:53.969 06:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 88942 00:15:53.969 [2024-08-14 06:47:21.157160] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.969 06:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 88942 00:15:53.969 [2024-08-14 06:47:21.193589] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.229 06:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.IaB6bOjZ55 00:15:54.229 06:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:15:54.229 06:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:15:54.229 06:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:15:54.229 06:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:15:54.229 06:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:54.229 06:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:54.229 06:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:15:54.229 00:15:54.229 real 0m6.930s 00:15:54.229 user 0m11.047s 00:15:54.229 sys 0m0.990s 00:15:54.229 06:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:54.229 06:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.229 ************************************ 00:15:54.229 END TEST raid_read_error_test 00:15:54.229 ************************************ 00:15:54.489 06:47:21 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:15:54.489 06:47:21 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:54.489 06:47:21 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:54.489 06:47:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:54.489 ************************************ 00:15:54.489 START TEST raid_write_error_test 00:15:54.489 ************************************ 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 4 write 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 
-- # (( i++ )) 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.c9p1Voh44n 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=89129 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 89129 /var/tmp/spdk-raid.sock 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 89129 ']' 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:54.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:54.490 06:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.490 [2024-08-14 06:47:21.597944] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
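The write-error case repeats the stack that the read-error case above just built: each base bdev is a malloc bdev wrapped by an error bdev and exposed through a passthru bdev, and the four passthru bdevs are then assembled into a concat raid before a write failure is injected on the first one. A minimal sketch of that RPC sequence, assuming the repo's own scripts/rpc.py and the /var/tmp/spdk-raid.sock socket used throughout this run:

    # malloc -> error -> passthru stack for each of the four base bdevs
    for i in 1 2 3 4; do
        scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create "BaseBdev${i}_malloc"
        scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
    # assemble the concat raid (64k strip, with superblock) on top of the passthru bdevs
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
    # inject write failures on the first base bdev, drive I/O from bdevperf, then tear down
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1

Because concat carries no redundancy, the injected failures show up as a non-zero fail-per-second figure in the bdevperf log, which is what the final [[ 0.47 != \0\.\0\0 ]] check below asserts.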
00:15:54.490 [2024-08-14 06:47:21.598069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89129 ] 00:15:54.749 [2024-08-14 06:47:21.743620] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.749 [2024-08-14 06:47:21.789682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.749 [2024-08-14 06:47:21.834092] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.749 [2024-08-14 06:47:21.834141] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.323 06:47:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:55.323 06:47:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:15:55.323 06:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:55.323 06:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:55.582 BaseBdev1_malloc 00:15:55.582 06:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:55.582 true 00:15:55.842 06:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:55.842 [2024-08-14 06:47:23.011345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:55.842 [2024-08-14 06:47:23.011429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.842 [2024-08-14 06:47:23.011458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:55.842 [2024-08-14 06:47:23.011476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.842 [2024-08-14 06:47:23.013997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.842 [2024-08-14 06:47:23.014053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:55.842 BaseBdev1 00:15:55.842 06:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:55.842 06:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:56.102 BaseBdev2_malloc 00:15:56.102 06:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:56.361 true 00:15:56.361 06:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:56.629 [2024-08-14 06:47:23.627565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:56.629 [2024-08-14 06:47:23.627657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.629 [2024-08-14 06:47:23.627684] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:56.629 [2024-08-14 06:47:23.627698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.629 [2024-08-14 06:47:23.629963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.629 [2024-08-14 06:47:23.630014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:56.629 BaseBdev2 00:15:56.629 06:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:56.629 06:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:56.629 BaseBdev3_malloc 00:15:56.629 06:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:56.891 true 00:15:56.891 06:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:57.149 [2024-08-14 06:47:24.227321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:57.149 [2024-08-14 06:47:24.227407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.149 [2024-08-14 06:47:24.227435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:57.149 [2024-08-14 06:47:24.227449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.149 [2024-08-14 06:47:24.229919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.149 [2024-08-14 06:47:24.229974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:57.149 BaseBdev3 00:15:57.149 06:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:57.149 06:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:57.407 BaseBdev4_malloc 00:15:57.407 06:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:57.407 true 00:15:57.407 06:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:57.665 [2024-08-14 06:47:24.855632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:57.665 [2024-08-14 06:47:24.855726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.665 [2024-08-14 06:47:24.855753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:57.665 [2024-08-14 06:47:24.855769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.665 [2024-08-14 06:47:24.857981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.665 [2024-08-14 06:47:24.858034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:57.665 BaseBdev4 00:15:57.665 
06:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:57.924 [2024-08-14 06:47:25.051435] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.924 [2024-08-14 06:47:25.053617] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.924 [2024-08-14 06:47:25.053724] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.924 [2024-08-14 06:47:25.053808] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:57.924 [2024-08-14 06:47:25.054078] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:15:57.924 [2024-08-14 06:47:25.054141] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:57.924 [2024-08-14 06:47:25.054503] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:57.924 [2024-08-14 06:47:25.054699] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:15:57.924 [2024-08-14 06:47:25.054721] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:15:57.924 [2024-08-14 06:47:25.054920] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.924 06:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:57.924 06:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:57.924 06:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:57.924 06:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:57.924 06:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:57.924 06:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:57.924 06:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:57.924 06:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:57.924 06:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:57.924 06:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:57.924 06:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.924 06:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.183 06:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:58.183 "name": "raid_bdev1", 00:15:58.183 "uuid": "e87e1f06-2dfb-45bb-8dee-463f2ce83ba8", 00:15:58.183 "strip_size_kb": 64, 00:15:58.183 "state": "online", 00:15:58.183 "raid_level": "concat", 00:15:58.183 "superblock": true, 00:15:58.183 "num_base_bdevs": 4, 00:15:58.183 "num_base_bdevs_discovered": 4, 00:15:58.183 "num_base_bdevs_operational": 4, 00:15:58.183 "base_bdevs_list": [ 00:15:58.183 { 00:15:58.183 "name": "BaseBdev1", 00:15:58.183 "uuid": "de903afc-132d-558c-b295-0d18b3746ad1", 00:15:58.183 
"is_configured": true, 00:15:58.183 "data_offset": 2048, 00:15:58.183 "data_size": 63488 00:15:58.183 }, 00:15:58.183 { 00:15:58.183 "name": "BaseBdev2", 00:15:58.183 "uuid": "809bc7ea-cf5d-592b-8f76-3a29cec93691", 00:15:58.183 "is_configured": true, 00:15:58.183 "data_offset": 2048, 00:15:58.183 "data_size": 63488 00:15:58.183 }, 00:15:58.183 { 00:15:58.183 "name": "BaseBdev3", 00:15:58.183 "uuid": "246d0125-1544-5afe-b2e4-33a471c655b8", 00:15:58.183 "is_configured": true, 00:15:58.183 "data_offset": 2048, 00:15:58.183 "data_size": 63488 00:15:58.183 }, 00:15:58.183 { 00:15:58.183 "name": "BaseBdev4", 00:15:58.183 "uuid": "dff85803-5cbb-5268-b50d-9e68883c31eb", 00:15:58.183 "is_configured": true, 00:15:58.183 "data_offset": 2048, 00:15:58.183 "data_size": 63488 00:15:58.183 } 00:15:58.183 ] 00:15:58.183 }' 00:15:58.183 06:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:58.183 06:47:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.750 06:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:15:58.750 06:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:58.750 [2024-08-14 06:47:25.966328] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:15:59.687 06:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:59.946 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:59.946 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:15:59.946 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:15:59.946 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:59.946 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:59.946 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:59.946 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:59.946 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:59.946 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:59.946 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.946 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.946 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.946 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.946 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.946 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.206 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:00.206 "name": "raid_bdev1", 00:16:00.206 "uuid": 
"e87e1f06-2dfb-45bb-8dee-463f2ce83ba8", 00:16:00.206 "strip_size_kb": 64, 00:16:00.206 "state": "online", 00:16:00.206 "raid_level": "concat", 00:16:00.206 "superblock": true, 00:16:00.206 "num_base_bdevs": 4, 00:16:00.206 "num_base_bdevs_discovered": 4, 00:16:00.206 "num_base_bdevs_operational": 4, 00:16:00.206 "base_bdevs_list": [ 00:16:00.206 { 00:16:00.206 "name": "BaseBdev1", 00:16:00.206 "uuid": "de903afc-132d-558c-b295-0d18b3746ad1", 00:16:00.206 "is_configured": true, 00:16:00.206 "data_offset": 2048, 00:16:00.206 "data_size": 63488 00:16:00.206 }, 00:16:00.206 { 00:16:00.206 "name": "BaseBdev2", 00:16:00.206 "uuid": "809bc7ea-cf5d-592b-8f76-3a29cec93691", 00:16:00.206 "is_configured": true, 00:16:00.206 "data_offset": 2048, 00:16:00.206 "data_size": 63488 00:16:00.206 }, 00:16:00.206 { 00:16:00.206 "name": "BaseBdev3", 00:16:00.206 "uuid": "246d0125-1544-5afe-b2e4-33a471c655b8", 00:16:00.206 "is_configured": true, 00:16:00.206 "data_offset": 2048, 00:16:00.206 "data_size": 63488 00:16:00.206 }, 00:16:00.206 { 00:16:00.206 "name": "BaseBdev4", 00:16:00.206 "uuid": "dff85803-5cbb-5268-b50d-9e68883c31eb", 00:16:00.206 "is_configured": true, 00:16:00.206 "data_offset": 2048, 00:16:00.206 "data_size": 63488 00:16:00.206 } 00:16:00.206 ] 00:16:00.206 }' 00:16:00.206 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:00.206 06:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.773 06:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:01.032 [2024-08-14 06:47:28.099529] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.032 [2024-08-14 06:47:28.099586] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.032 [2024-08-14 06:47:28.102145] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.032 [2024-08-14 06:47:28.102237] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.032 [2024-08-14 06:47:28.102288] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.032 [2024-08-14 06:47:28.102309] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:16:01.032 0 00:16:01.032 06:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 89129 00:16:01.032 06:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 89129 ']' 00:16:01.032 06:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 89129 00:16:01.032 06:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:16:01.032 06:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:01.032 06:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89129 00:16:01.032 killing process with pid 89129 00:16:01.032 06:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:01.032 06:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:01.032 06:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89129' 00:16:01.032 06:47:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 89129 00:16:01.032 [2024-08-14 06:47:28.170965] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.032 06:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 89129 00:16:01.032 [2024-08-14 06:47:28.209456] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.292 06:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.c9p1Voh44n 00:16:01.292 06:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:01.292 06:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:01.292 06:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:16:01.292 06:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:16:01.292 06:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:01.292 06:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:01.292 06:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:16:01.292 00:16:01.292 real 0m6.953s 00:16:01.292 user 0m11.114s 00:16:01.292 sys 0m0.965s 00:16:01.292 06:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:01.292 06:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.292 ************************************ 00:16:01.292 END TEST raid_write_error_test 00:16:01.292 ************************************ 00:16:01.292 06:47:28 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:16:01.292 06:47:28 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:16:01.292 06:47:28 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:01.292 06:47:28 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:01.292 06:47:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:01.292 ************************************ 00:16:01.292 START TEST raid_state_function_test 00:16:01.292 ************************************ 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 false 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:01.292 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=89311 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:01.552 Process raid pid: 89311 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 89311' 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 89311 /var/tmp/spdk-raid.sock 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 89311 ']' 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:01.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:01.552 06:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.552 [2024-08-14 06:47:28.627168] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
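Unlike the two error tests above, raid_state_function_test drives a bdev_svc app rather than bdevperf and exercises the configuring/online state machine: a raid1 bdev can be registered before any of its base bdevs exist, sits in the configuring state, and claims each base bdev as it appears until all four are discovered. A minimal sketch of the first check, assuming the same rpc.py and RPC socket as above:

    # declare the raid before its members exist; it is created in the "configuring" state
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all   # num_base_bdevs_discovered: 0
    # create the first base bdev; the waiting raid claims it immediately
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all   # num_base_bdevs_discovered: 1
    # drop and re-create the raid between steps, as the test below does
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid

Each bdev_raid_get_bdevs call is filtered through jq for the Existed_Raid entry and compared against the expected state, raid level, strip size, and base-bdev counts.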
00:16:01.552 [2024-08-14 06:47:28.627323] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.552 [2024-08-14 06:47:28.776634] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.811 [2024-08-14 06:47:28.828840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.811 [2024-08-14 06:47:28.873069] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.811 [2024-08-14 06:47:28.873115] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.402 06:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:02.402 06:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:16:02.402 06:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:02.662 [2024-08-14 06:47:29.658090] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:02.662 [2024-08-14 06:47:29.658182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:02.662 [2024-08-14 06:47:29.658200] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.662 [2024-08-14 06:47:29.658211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.662 [2024-08-14 06:47:29.658224] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:02.662 [2024-08-14 06:47:29.658233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.662 [2024-08-14 06:47:29.658246] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:02.662 [2024-08-14 06:47:29.658255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:02.662 06:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:02.662 06:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:02.662 06:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:02.662 06:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:02.662 06:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:02.662 06:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:02.662 06:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.662 06:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.662 06:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.662 06:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.662 06:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:02.662 06:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.662 06:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:02.662 "name": "Existed_Raid", 00:16:02.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.662 "strip_size_kb": 0, 00:16:02.662 "state": "configuring", 00:16:02.662 "raid_level": "raid1", 00:16:02.662 "superblock": false, 00:16:02.662 "num_base_bdevs": 4, 00:16:02.662 "num_base_bdevs_discovered": 0, 00:16:02.662 "num_base_bdevs_operational": 4, 00:16:02.662 "base_bdevs_list": [ 00:16:02.662 { 00:16:02.662 "name": "BaseBdev1", 00:16:02.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.662 "is_configured": false, 00:16:02.662 "data_offset": 0, 00:16:02.662 "data_size": 0 00:16:02.662 }, 00:16:02.662 { 00:16:02.662 "name": "BaseBdev2", 00:16:02.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.662 "is_configured": false, 00:16:02.662 "data_offset": 0, 00:16:02.662 "data_size": 0 00:16:02.662 }, 00:16:02.662 { 00:16:02.662 "name": "BaseBdev3", 00:16:02.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.662 "is_configured": false, 00:16:02.662 "data_offset": 0, 00:16:02.662 "data_size": 0 00:16:02.662 }, 00:16:02.662 { 00:16:02.662 "name": "BaseBdev4", 00:16:02.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.662 "is_configured": false, 00:16:02.662 "data_offset": 0, 00:16:02.662 "data_size": 0 00:16:02.662 } 00:16:02.662 ] 00:16:02.662 }' 00:16:02.662 06:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:02.662 06:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.229 06:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:03.489 [2024-08-14 06:47:30.636454] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:03.489 [2024-08-14 06:47:30.636505] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:16:03.489 06:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:03.748 [2024-08-14 06:47:30.840140] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:03.748 [2024-08-14 06:47:30.840218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:03.748 [2024-08-14 06:47:30.840234] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.748 [2024-08-14 06:47:30.840243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.748 [2024-08-14 06:47:30.840254] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:03.748 [2024-08-14 06:47:30.840263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:03.748 [2024-08-14 06:47:30.840273] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:03.748 [2024-08-14 06:47:30.840281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:03.748 
06:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:04.008 [2024-08-14 06:47:31.037547] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.008 BaseBdev1 00:16:04.008 06:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:04.008 06:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:04.008 06:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:04.008 06:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:04.008 06:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:04.008 06:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:04.008 06:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:04.267 06:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:04.267 [ 00:16:04.267 { 00:16:04.267 "name": "BaseBdev1", 00:16:04.267 "aliases": [ 00:16:04.267 "a0edb39b-24c2-41ea-b859-fe51298a57ac" 00:16:04.267 ], 00:16:04.267 "product_name": "Malloc disk", 00:16:04.267 "block_size": 512, 00:16:04.267 "num_blocks": 65536, 00:16:04.267 "uuid": "a0edb39b-24c2-41ea-b859-fe51298a57ac", 00:16:04.267 "assigned_rate_limits": { 00:16:04.267 "rw_ios_per_sec": 0, 00:16:04.267 "rw_mbytes_per_sec": 0, 00:16:04.267 "r_mbytes_per_sec": 0, 00:16:04.267 "w_mbytes_per_sec": 0 00:16:04.267 }, 00:16:04.267 "claimed": true, 00:16:04.267 "claim_type": "exclusive_write", 00:16:04.267 "zoned": false, 00:16:04.267 "supported_io_types": { 00:16:04.267 "read": true, 00:16:04.267 "write": true, 00:16:04.267 "unmap": true, 00:16:04.267 "flush": true, 00:16:04.267 "reset": true, 00:16:04.267 "nvme_admin": false, 00:16:04.267 "nvme_io": false, 00:16:04.267 "nvme_io_md": false, 00:16:04.267 "write_zeroes": true, 00:16:04.267 "zcopy": true, 00:16:04.267 "get_zone_info": false, 00:16:04.267 "zone_management": false, 00:16:04.267 "zone_append": false, 00:16:04.267 "compare": false, 00:16:04.267 "compare_and_write": false, 00:16:04.267 "abort": true, 00:16:04.267 "seek_hole": false, 00:16:04.267 "seek_data": false, 00:16:04.267 "copy": true, 00:16:04.267 "nvme_iov_md": false 00:16:04.267 }, 00:16:04.267 "memory_domains": [ 00:16:04.267 { 00:16:04.267 "dma_device_id": "system", 00:16:04.267 "dma_device_type": 1 00:16:04.267 }, 00:16:04.267 { 00:16:04.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.267 "dma_device_type": 2 00:16:04.267 } 00:16:04.267 ], 00:16:04.267 "driver_specific": {} 00:16:04.267 } 00:16:04.267 ] 00:16:04.267 06:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:04.267 06:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:04.267 06:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:04.267 06:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:04.267 
06:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:04.267 06:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:04.267 06:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:04.267 06:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:04.267 06:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:04.267 06:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:04.267 06:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:04.268 06:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.268 06:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.527 06:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.527 "name": "Existed_Raid", 00:16:04.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.527 "strip_size_kb": 0, 00:16:04.527 "state": "configuring", 00:16:04.527 "raid_level": "raid1", 00:16:04.527 "superblock": false, 00:16:04.527 "num_base_bdevs": 4, 00:16:04.527 "num_base_bdevs_discovered": 1, 00:16:04.527 "num_base_bdevs_operational": 4, 00:16:04.527 "base_bdevs_list": [ 00:16:04.527 { 00:16:04.527 "name": "BaseBdev1", 00:16:04.527 "uuid": "a0edb39b-24c2-41ea-b859-fe51298a57ac", 00:16:04.527 "is_configured": true, 00:16:04.527 "data_offset": 0, 00:16:04.527 "data_size": 65536 00:16:04.527 }, 00:16:04.527 { 00:16:04.527 "name": "BaseBdev2", 00:16:04.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.527 "is_configured": false, 00:16:04.527 "data_offset": 0, 00:16:04.527 "data_size": 0 00:16:04.527 }, 00:16:04.527 { 00:16:04.527 "name": "BaseBdev3", 00:16:04.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.527 "is_configured": false, 00:16:04.527 "data_offset": 0, 00:16:04.527 "data_size": 0 00:16:04.527 }, 00:16:04.527 { 00:16:04.527 "name": "BaseBdev4", 00:16:04.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.527 "is_configured": false, 00:16:04.527 "data_offset": 0, 00:16:04.527 "data_size": 0 00:16:04.527 } 00:16:04.527 ] 00:16:04.527 }' 00:16:04.527 06:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.527 06:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.096 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:05.356 [2024-08-14 06:47:32.423306] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.356 [2024-08-14 06:47:32.423388] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:16:05.356 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:05.616 [2024-08-14 06:47:32.627035] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.616 
[2024-08-14 06:47:32.629014] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:05.616 [2024-08-14 06:47:32.629062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:05.616 [2024-08-14 06:47:32.629076] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:05.616 [2024-08-14 06:47:32.629086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:05.616 [2024-08-14 06:47:32.629100] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:05.616 [2024-08-14 06:47:32.629109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:05.616 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:05.616 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:05.616 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:05.616 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:05.616 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:05.616 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:05.616 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:05.616 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:05.616 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:05.616 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:05.616 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:05.616 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:05.616 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.616 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.616 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:05.616 "name": "Existed_Raid", 00:16:05.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.616 "strip_size_kb": 0, 00:16:05.616 "state": "configuring", 00:16:05.616 "raid_level": "raid1", 00:16:05.616 "superblock": false, 00:16:05.616 "num_base_bdevs": 4, 00:16:05.616 "num_base_bdevs_discovered": 1, 00:16:05.616 "num_base_bdevs_operational": 4, 00:16:05.616 "base_bdevs_list": [ 00:16:05.616 { 00:16:05.616 "name": "BaseBdev1", 00:16:05.616 "uuid": "a0edb39b-24c2-41ea-b859-fe51298a57ac", 00:16:05.616 "is_configured": true, 00:16:05.616 "data_offset": 0, 00:16:05.616 "data_size": 65536 00:16:05.616 }, 00:16:05.616 { 00:16:05.616 "name": "BaseBdev2", 00:16:05.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.616 "is_configured": false, 00:16:05.616 "data_offset": 0, 00:16:05.616 "data_size": 0 00:16:05.616 }, 00:16:05.616 { 00:16:05.616 "name": "BaseBdev3", 00:16:05.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.616 "is_configured": false, 
00:16:05.616 "data_offset": 0, 00:16:05.616 "data_size": 0 00:16:05.616 }, 00:16:05.616 { 00:16:05.616 "name": "BaseBdev4", 00:16:05.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.616 "is_configured": false, 00:16:05.616 "data_offset": 0, 00:16:05.617 "data_size": 0 00:16:05.617 } 00:16:05.617 ] 00:16:05.617 }' 00:16:05.617 06:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:05.617 06:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.185 06:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:06.444 [2024-08-14 06:47:33.627658] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.444 BaseBdev2 00:16:06.444 06:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:06.444 06:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:06.444 06:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:06.444 06:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:06.444 06:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:06.444 06:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:06.444 06:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.703 06:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:06.961 [ 00:16:06.961 { 00:16:06.961 "name": "BaseBdev2", 00:16:06.961 "aliases": [ 00:16:06.961 "b5a24fa1-9cac-46d8-b5fc-e898e705fa24" 00:16:06.961 ], 00:16:06.961 "product_name": "Malloc disk", 00:16:06.961 "block_size": 512, 00:16:06.961 "num_blocks": 65536, 00:16:06.961 "uuid": "b5a24fa1-9cac-46d8-b5fc-e898e705fa24", 00:16:06.962 "assigned_rate_limits": { 00:16:06.962 "rw_ios_per_sec": 0, 00:16:06.962 "rw_mbytes_per_sec": 0, 00:16:06.962 "r_mbytes_per_sec": 0, 00:16:06.962 "w_mbytes_per_sec": 0 00:16:06.962 }, 00:16:06.962 "claimed": true, 00:16:06.962 "claim_type": "exclusive_write", 00:16:06.962 "zoned": false, 00:16:06.962 "supported_io_types": { 00:16:06.962 "read": true, 00:16:06.962 "write": true, 00:16:06.962 "unmap": true, 00:16:06.962 "flush": true, 00:16:06.962 "reset": true, 00:16:06.962 "nvme_admin": false, 00:16:06.962 "nvme_io": false, 00:16:06.962 "nvme_io_md": false, 00:16:06.962 "write_zeroes": true, 00:16:06.962 "zcopy": true, 00:16:06.962 "get_zone_info": false, 00:16:06.962 "zone_management": false, 00:16:06.962 "zone_append": false, 00:16:06.962 "compare": false, 00:16:06.962 "compare_and_write": false, 00:16:06.962 "abort": true, 00:16:06.962 "seek_hole": false, 00:16:06.962 "seek_data": false, 00:16:06.962 "copy": true, 00:16:06.962 "nvme_iov_md": false 00:16:06.962 }, 00:16:06.962 "memory_domains": [ 00:16:06.962 { 00:16:06.962 "dma_device_id": "system", 00:16:06.962 "dma_device_type": 1 00:16:06.962 }, 00:16:06.962 { 00:16:06.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.962 "dma_device_type": 2 00:16:06.962 } 00:16:06.962 ], 00:16:06.962 
"driver_specific": {} 00:16:06.962 } 00:16:06.962 ] 00:16:06.962 06:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:06.962 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:06.962 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:06.962 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:06.962 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:06.962 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:06.962 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:06.962 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:06.962 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:06.962 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:06.962 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:06.962 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:06.962 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:06.962 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.962 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.221 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:07.221 "name": "Existed_Raid", 00:16:07.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.221 "strip_size_kb": 0, 00:16:07.221 "state": "configuring", 00:16:07.221 "raid_level": "raid1", 00:16:07.221 "superblock": false, 00:16:07.221 "num_base_bdevs": 4, 00:16:07.222 "num_base_bdevs_discovered": 2, 00:16:07.222 "num_base_bdevs_operational": 4, 00:16:07.222 "base_bdevs_list": [ 00:16:07.222 { 00:16:07.222 "name": "BaseBdev1", 00:16:07.222 "uuid": "a0edb39b-24c2-41ea-b859-fe51298a57ac", 00:16:07.222 "is_configured": true, 00:16:07.222 "data_offset": 0, 00:16:07.222 "data_size": 65536 00:16:07.222 }, 00:16:07.222 { 00:16:07.222 "name": "BaseBdev2", 00:16:07.222 "uuid": "b5a24fa1-9cac-46d8-b5fc-e898e705fa24", 00:16:07.222 "is_configured": true, 00:16:07.222 "data_offset": 0, 00:16:07.222 "data_size": 65536 00:16:07.222 }, 00:16:07.222 { 00:16:07.222 "name": "BaseBdev3", 00:16:07.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.222 "is_configured": false, 00:16:07.222 "data_offset": 0, 00:16:07.222 "data_size": 0 00:16:07.222 }, 00:16:07.222 { 00:16:07.222 "name": "BaseBdev4", 00:16:07.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.222 "is_configured": false, 00:16:07.222 "data_offset": 0, 00:16:07.222 "data_size": 0 00:16:07.222 } 00:16:07.222 ] 00:16:07.222 }' 00:16:07.222 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:07.222 06:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.788 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:07.788 [2024-08-14 06:47:34.986512] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.788 BaseBdev3 00:16:07.788 06:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:07.788 06:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:07.788 06:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:07.788 06:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:07.788 06:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:07.788 06:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:07.788 06:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:08.046 06:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:08.303 [ 00:16:08.303 { 00:16:08.303 "name": "BaseBdev3", 00:16:08.303 "aliases": [ 00:16:08.303 "54cf9cde-8cf8-44b6-a98e-68a1cb105668" 00:16:08.303 ], 00:16:08.303 "product_name": "Malloc disk", 00:16:08.303 "block_size": 512, 00:16:08.303 "num_blocks": 65536, 00:16:08.303 "uuid": "54cf9cde-8cf8-44b6-a98e-68a1cb105668", 00:16:08.303 "assigned_rate_limits": { 00:16:08.303 "rw_ios_per_sec": 0, 00:16:08.304 "rw_mbytes_per_sec": 0, 00:16:08.304 "r_mbytes_per_sec": 0, 00:16:08.304 "w_mbytes_per_sec": 0 00:16:08.304 }, 00:16:08.304 "claimed": true, 00:16:08.304 "claim_type": "exclusive_write", 00:16:08.304 "zoned": false, 00:16:08.304 "supported_io_types": { 00:16:08.304 "read": true, 00:16:08.304 "write": true, 00:16:08.304 "unmap": true, 00:16:08.304 "flush": true, 00:16:08.304 "reset": true, 00:16:08.304 "nvme_admin": false, 00:16:08.304 "nvme_io": false, 00:16:08.304 "nvme_io_md": false, 00:16:08.304 "write_zeroes": true, 00:16:08.304 "zcopy": true, 00:16:08.304 "get_zone_info": false, 00:16:08.304 "zone_management": false, 00:16:08.304 "zone_append": false, 00:16:08.304 "compare": false, 00:16:08.304 "compare_and_write": false, 00:16:08.304 "abort": true, 00:16:08.304 "seek_hole": false, 00:16:08.304 "seek_data": false, 00:16:08.304 "copy": true, 00:16:08.304 "nvme_iov_md": false 00:16:08.304 }, 00:16:08.304 "memory_domains": [ 00:16:08.304 { 00:16:08.304 "dma_device_id": "system", 00:16:08.304 "dma_device_type": 1 00:16:08.304 }, 00:16:08.304 { 00:16:08.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.304 "dma_device_type": 2 00:16:08.304 } 00:16:08.304 ], 00:16:08.304 "driver_specific": {} 00:16:08.304 } 00:16:08.304 ] 00:16:08.304 06:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:08.304 06:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:08.304 06:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:08.304 06:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:08.304 06:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:16:08.304 06:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:08.304 06:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:08.304 06:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:08.304 06:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:08.304 06:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:08.304 06:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:08.304 06:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:08.304 06:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:08.304 06:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.304 06:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.563 06:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:08.563 "name": "Existed_Raid", 00:16:08.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.563 "strip_size_kb": 0, 00:16:08.563 "state": "configuring", 00:16:08.563 "raid_level": "raid1", 00:16:08.563 "superblock": false, 00:16:08.563 "num_base_bdevs": 4, 00:16:08.563 "num_base_bdevs_discovered": 3, 00:16:08.563 "num_base_bdevs_operational": 4, 00:16:08.563 "base_bdevs_list": [ 00:16:08.563 { 00:16:08.563 "name": "BaseBdev1", 00:16:08.563 "uuid": "a0edb39b-24c2-41ea-b859-fe51298a57ac", 00:16:08.563 "is_configured": true, 00:16:08.563 "data_offset": 0, 00:16:08.563 "data_size": 65536 00:16:08.563 }, 00:16:08.563 { 00:16:08.563 "name": "BaseBdev2", 00:16:08.563 "uuid": "b5a24fa1-9cac-46d8-b5fc-e898e705fa24", 00:16:08.563 "is_configured": true, 00:16:08.563 "data_offset": 0, 00:16:08.563 "data_size": 65536 00:16:08.563 }, 00:16:08.563 { 00:16:08.563 "name": "BaseBdev3", 00:16:08.563 "uuid": "54cf9cde-8cf8-44b6-a98e-68a1cb105668", 00:16:08.563 "is_configured": true, 00:16:08.563 "data_offset": 0, 00:16:08.563 "data_size": 65536 00:16:08.563 }, 00:16:08.563 { 00:16:08.563 "name": "BaseBdev4", 00:16:08.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.563 "is_configured": false, 00:16:08.563 "data_offset": 0, 00:16:08.563 "data_size": 0 00:16:08.563 } 00:16:08.563 ] 00:16:08.563 }' 00:16:08.563 06:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:08.563 06:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.130 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:09.130 [2024-08-14 06:47:36.365533] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:09.130 [2024-08-14 06:47:36.365652] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:09.130 [2024-08-14 06:47:36.365672] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:09.130 [2024-08-14 06:47:36.365988] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002530 00:16:09.130 [2024-08-14 06:47:36.366229] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:09.130 [2024-08-14 06:47:36.366246] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:16:09.130 [2024-08-14 06:47:36.366495] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.130 BaseBdev4 00:16:09.389 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:16:09.389 06:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:16:09.389 06:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:09.389 06:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:09.389 06:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:09.389 06:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:09.389 06:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:09.389 06:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:09.648 [ 00:16:09.648 { 00:16:09.648 "name": "BaseBdev4", 00:16:09.648 "aliases": [ 00:16:09.648 "85603806-d76c-481a-9cbe-6da95b124a9b" 00:16:09.648 ], 00:16:09.648 "product_name": "Malloc disk", 00:16:09.648 "block_size": 512, 00:16:09.648 "num_blocks": 65536, 00:16:09.648 "uuid": "85603806-d76c-481a-9cbe-6da95b124a9b", 00:16:09.648 "assigned_rate_limits": { 00:16:09.648 "rw_ios_per_sec": 0, 00:16:09.648 "rw_mbytes_per_sec": 0, 00:16:09.648 "r_mbytes_per_sec": 0, 00:16:09.648 "w_mbytes_per_sec": 0 00:16:09.648 }, 00:16:09.648 "claimed": true, 00:16:09.648 "claim_type": "exclusive_write", 00:16:09.648 "zoned": false, 00:16:09.648 "supported_io_types": { 00:16:09.648 "read": true, 00:16:09.648 "write": true, 00:16:09.648 "unmap": true, 00:16:09.648 "flush": true, 00:16:09.648 "reset": true, 00:16:09.648 "nvme_admin": false, 00:16:09.648 "nvme_io": false, 00:16:09.648 "nvme_io_md": false, 00:16:09.648 "write_zeroes": true, 00:16:09.648 "zcopy": true, 00:16:09.648 "get_zone_info": false, 00:16:09.648 "zone_management": false, 00:16:09.648 "zone_append": false, 00:16:09.648 "compare": false, 00:16:09.648 "compare_and_write": false, 00:16:09.648 "abort": true, 00:16:09.648 "seek_hole": false, 00:16:09.648 "seek_data": false, 00:16:09.648 "copy": true, 00:16:09.648 "nvme_iov_md": false 00:16:09.648 }, 00:16:09.648 "memory_domains": [ 00:16:09.648 { 00:16:09.648 "dma_device_id": "system", 00:16:09.648 "dma_device_type": 1 00:16:09.648 }, 00:16:09.648 { 00:16:09.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.648 "dma_device_type": 2 00:16:09.648 } 00:16:09.648 ], 00:16:09.648 "driver_specific": {} 00:16:09.648 } 00:16:09.648 ] 00:16:09.648 06:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:09.648 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:09.648 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:09.648 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:09.648 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:09.648 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:09.648 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:09.648 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:09.648 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:09.648 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:09.648 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:09.648 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:09.648 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:09.648 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.648 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.914 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.914 "name": "Existed_Raid", 00:16:09.914 "uuid": "b8421b9a-1f80-44e5-a3bb-5409b02c2a15", 00:16:09.914 "strip_size_kb": 0, 00:16:09.914 "state": "online", 00:16:09.914 "raid_level": "raid1", 00:16:09.914 "superblock": false, 00:16:09.914 "num_base_bdevs": 4, 00:16:09.914 "num_base_bdevs_discovered": 4, 00:16:09.914 "num_base_bdevs_operational": 4, 00:16:09.914 "base_bdevs_list": [ 00:16:09.914 { 00:16:09.914 "name": "BaseBdev1", 00:16:09.914 "uuid": "a0edb39b-24c2-41ea-b859-fe51298a57ac", 00:16:09.914 "is_configured": true, 00:16:09.914 "data_offset": 0, 00:16:09.914 "data_size": 65536 00:16:09.914 }, 00:16:09.914 { 00:16:09.914 "name": "BaseBdev2", 00:16:09.914 "uuid": "b5a24fa1-9cac-46d8-b5fc-e898e705fa24", 00:16:09.914 "is_configured": true, 00:16:09.914 "data_offset": 0, 00:16:09.914 "data_size": 65536 00:16:09.914 }, 00:16:09.914 { 00:16:09.914 "name": "BaseBdev3", 00:16:09.914 "uuid": "54cf9cde-8cf8-44b6-a98e-68a1cb105668", 00:16:09.914 "is_configured": true, 00:16:09.914 "data_offset": 0, 00:16:09.915 "data_size": 65536 00:16:09.915 }, 00:16:09.915 { 00:16:09.915 "name": "BaseBdev4", 00:16:09.915 "uuid": "85603806-d76c-481a-9cbe-6da95b124a9b", 00:16:09.915 "is_configured": true, 00:16:09.915 "data_offset": 0, 00:16:09.915 "data_size": 65536 00:16:09.915 } 00:16:09.915 ] 00:16:09.915 }' 00:16:09.915 06:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:09.915 06:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.500 06:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:10.500 06:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:10.500 06:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:10.500 06:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:10.500 06:47:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:10.500 06:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:10.500 06:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:10.500 06:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:10.500 [2024-08-14 06:47:37.704311] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.500 06:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:10.500 "name": "Existed_Raid", 00:16:10.500 "aliases": [ 00:16:10.500 "b8421b9a-1f80-44e5-a3bb-5409b02c2a15" 00:16:10.500 ], 00:16:10.500 "product_name": "Raid Volume", 00:16:10.500 "block_size": 512, 00:16:10.500 "num_blocks": 65536, 00:16:10.500 "uuid": "b8421b9a-1f80-44e5-a3bb-5409b02c2a15", 00:16:10.500 "assigned_rate_limits": { 00:16:10.500 "rw_ios_per_sec": 0, 00:16:10.500 "rw_mbytes_per_sec": 0, 00:16:10.500 "r_mbytes_per_sec": 0, 00:16:10.500 "w_mbytes_per_sec": 0 00:16:10.500 }, 00:16:10.500 "claimed": false, 00:16:10.500 "zoned": false, 00:16:10.500 "supported_io_types": { 00:16:10.500 "read": true, 00:16:10.500 "write": true, 00:16:10.500 "unmap": false, 00:16:10.500 "flush": false, 00:16:10.500 "reset": true, 00:16:10.500 "nvme_admin": false, 00:16:10.500 "nvme_io": false, 00:16:10.500 "nvme_io_md": false, 00:16:10.500 "write_zeroes": true, 00:16:10.500 "zcopy": false, 00:16:10.500 "get_zone_info": false, 00:16:10.500 "zone_management": false, 00:16:10.500 "zone_append": false, 00:16:10.500 "compare": false, 00:16:10.500 "compare_and_write": false, 00:16:10.500 "abort": false, 00:16:10.500 "seek_hole": false, 00:16:10.500 "seek_data": false, 00:16:10.500 "copy": false, 00:16:10.500 "nvme_iov_md": false 00:16:10.500 }, 00:16:10.500 "memory_domains": [ 00:16:10.500 { 00:16:10.500 "dma_device_id": "system", 00:16:10.500 "dma_device_type": 1 00:16:10.500 }, 00:16:10.500 { 00:16:10.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.500 "dma_device_type": 2 00:16:10.500 }, 00:16:10.500 { 00:16:10.500 "dma_device_id": "system", 00:16:10.500 "dma_device_type": 1 00:16:10.500 }, 00:16:10.500 { 00:16:10.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.500 "dma_device_type": 2 00:16:10.500 }, 00:16:10.500 { 00:16:10.500 "dma_device_id": "system", 00:16:10.500 "dma_device_type": 1 00:16:10.500 }, 00:16:10.500 { 00:16:10.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.500 "dma_device_type": 2 00:16:10.500 }, 00:16:10.500 { 00:16:10.500 "dma_device_id": "system", 00:16:10.500 "dma_device_type": 1 00:16:10.500 }, 00:16:10.500 { 00:16:10.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.500 "dma_device_type": 2 00:16:10.500 } 00:16:10.500 ], 00:16:10.500 "driver_specific": { 00:16:10.500 "raid": { 00:16:10.500 "uuid": "b8421b9a-1f80-44e5-a3bb-5409b02c2a15", 00:16:10.500 "strip_size_kb": 0, 00:16:10.500 "state": "online", 00:16:10.500 "raid_level": "raid1", 00:16:10.500 "superblock": false, 00:16:10.500 "num_base_bdevs": 4, 00:16:10.500 "num_base_bdevs_discovered": 4, 00:16:10.500 "num_base_bdevs_operational": 4, 00:16:10.500 "base_bdevs_list": [ 00:16:10.500 { 00:16:10.500 "name": "BaseBdev1", 00:16:10.500 "uuid": "a0edb39b-24c2-41ea-b859-fe51298a57ac", 00:16:10.500 "is_configured": true, 00:16:10.500 "data_offset": 0, 00:16:10.500 "data_size": 65536 00:16:10.500 }, 00:16:10.500 { 00:16:10.500 "name": "BaseBdev2", 
00:16:10.500 "uuid": "b5a24fa1-9cac-46d8-b5fc-e898e705fa24", 00:16:10.500 "is_configured": true, 00:16:10.500 "data_offset": 0, 00:16:10.500 "data_size": 65536 00:16:10.500 }, 00:16:10.500 { 00:16:10.500 "name": "BaseBdev3", 00:16:10.500 "uuid": "54cf9cde-8cf8-44b6-a98e-68a1cb105668", 00:16:10.500 "is_configured": true, 00:16:10.500 "data_offset": 0, 00:16:10.500 "data_size": 65536 00:16:10.500 }, 00:16:10.500 { 00:16:10.500 "name": "BaseBdev4", 00:16:10.500 "uuid": "85603806-d76c-481a-9cbe-6da95b124a9b", 00:16:10.500 "is_configured": true, 00:16:10.500 "data_offset": 0, 00:16:10.500 "data_size": 65536 00:16:10.500 } 00:16:10.500 ] 00:16:10.500 } 00:16:10.500 } 00:16:10.500 }' 00:16:10.500 06:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:10.760 06:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:10.760 BaseBdev2 00:16:10.760 BaseBdev3 00:16:10.760 BaseBdev4' 00:16:10.760 06:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:10.760 06:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:10.760 06:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:10.760 06:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:10.760 "name": "BaseBdev1", 00:16:10.760 "aliases": [ 00:16:10.760 "a0edb39b-24c2-41ea-b859-fe51298a57ac" 00:16:10.760 ], 00:16:10.760 "product_name": "Malloc disk", 00:16:10.760 "block_size": 512, 00:16:10.760 "num_blocks": 65536, 00:16:10.760 "uuid": "a0edb39b-24c2-41ea-b859-fe51298a57ac", 00:16:10.760 "assigned_rate_limits": { 00:16:10.760 "rw_ios_per_sec": 0, 00:16:10.760 "rw_mbytes_per_sec": 0, 00:16:10.760 "r_mbytes_per_sec": 0, 00:16:10.760 "w_mbytes_per_sec": 0 00:16:10.760 }, 00:16:10.760 "claimed": true, 00:16:10.760 "claim_type": "exclusive_write", 00:16:10.760 "zoned": false, 00:16:10.760 "supported_io_types": { 00:16:10.760 "read": true, 00:16:10.760 "write": true, 00:16:10.760 "unmap": true, 00:16:10.760 "flush": true, 00:16:10.760 "reset": true, 00:16:10.760 "nvme_admin": false, 00:16:10.760 "nvme_io": false, 00:16:10.760 "nvme_io_md": false, 00:16:10.760 "write_zeroes": true, 00:16:10.760 "zcopy": true, 00:16:10.760 "get_zone_info": false, 00:16:10.760 "zone_management": false, 00:16:10.760 "zone_append": false, 00:16:10.760 "compare": false, 00:16:10.760 "compare_and_write": false, 00:16:10.760 "abort": true, 00:16:10.760 "seek_hole": false, 00:16:10.760 "seek_data": false, 00:16:10.760 "copy": true, 00:16:10.760 "nvme_iov_md": false 00:16:10.760 }, 00:16:10.760 "memory_domains": [ 00:16:10.760 { 00:16:10.760 "dma_device_id": "system", 00:16:10.760 "dma_device_type": 1 00:16:10.760 }, 00:16:10.760 { 00:16:10.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.760 "dma_device_type": 2 00:16:10.760 } 00:16:10.760 ], 00:16:10.760 "driver_specific": {} 00:16:10.760 }' 00:16:10.760 06:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.019 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.019 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:11.019 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:16:11.019 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:11.019 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:11.019 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.019 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.019 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:11.019 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.019 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.277 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:11.277 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:11.277 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:11.277 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:11.535 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:11.535 "name": "BaseBdev2", 00:16:11.535 "aliases": [ 00:16:11.535 "b5a24fa1-9cac-46d8-b5fc-e898e705fa24" 00:16:11.535 ], 00:16:11.535 "product_name": "Malloc disk", 00:16:11.535 "block_size": 512, 00:16:11.535 "num_blocks": 65536, 00:16:11.535 "uuid": "b5a24fa1-9cac-46d8-b5fc-e898e705fa24", 00:16:11.535 "assigned_rate_limits": { 00:16:11.535 "rw_ios_per_sec": 0, 00:16:11.535 "rw_mbytes_per_sec": 0, 00:16:11.535 "r_mbytes_per_sec": 0, 00:16:11.535 "w_mbytes_per_sec": 0 00:16:11.535 }, 00:16:11.535 "claimed": true, 00:16:11.535 "claim_type": "exclusive_write", 00:16:11.535 "zoned": false, 00:16:11.535 "supported_io_types": { 00:16:11.535 "read": true, 00:16:11.535 "write": true, 00:16:11.535 "unmap": true, 00:16:11.535 "flush": true, 00:16:11.535 "reset": true, 00:16:11.535 "nvme_admin": false, 00:16:11.535 "nvme_io": false, 00:16:11.535 "nvme_io_md": false, 00:16:11.535 "write_zeroes": true, 00:16:11.535 "zcopy": true, 00:16:11.535 "get_zone_info": false, 00:16:11.535 "zone_management": false, 00:16:11.535 "zone_append": false, 00:16:11.535 "compare": false, 00:16:11.535 "compare_and_write": false, 00:16:11.535 "abort": true, 00:16:11.535 "seek_hole": false, 00:16:11.535 "seek_data": false, 00:16:11.535 "copy": true, 00:16:11.535 "nvme_iov_md": false 00:16:11.535 }, 00:16:11.535 "memory_domains": [ 00:16:11.535 { 00:16:11.535 "dma_device_id": "system", 00:16:11.535 "dma_device_type": 1 00:16:11.535 }, 00:16:11.535 { 00:16:11.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.535 "dma_device_type": 2 00:16:11.535 } 00:16:11.535 ], 00:16:11.535 "driver_specific": {} 00:16:11.535 }' 00:16:11.535 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.535 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.535 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:11.535 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:11.535 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:11.535 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:16:11.535 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.535 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.535 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:11.535 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.794 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.794 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:11.794 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:11.794 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:11.794 06:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:12.052 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:12.052 "name": "BaseBdev3", 00:16:12.052 "aliases": [ 00:16:12.052 "54cf9cde-8cf8-44b6-a98e-68a1cb105668" 00:16:12.052 ], 00:16:12.052 "product_name": "Malloc disk", 00:16:12.052 "block_size": 512, 00:16:12.052 "num_blocks": 65536, 00:16:12.052 "uuid": "54cf9cde-8cf8-44b6-a98e-68a1cb105668", 00:16:12.052 "assigned_rate_limits": { 00:16:12.052 "rw_ios_per_sec": 0, 00:16:12.052 "rw_mbytes_per_sec": 0, 00:16:12.052 "r_mbytes_per_sec": 0, 00:16:12.052 "w_mbytes_per_sec": 0 00:16:12.052 }, 00:16:12.052 "claimed": true, 00:16:12.052 "claim_type": "exclusive_write", 00:16:12.052 "zoned": false, 00:16:12.052 "supported_io_types": { 00:16:12.052 "read": true, 00:16:12.052 "write": true, 00:16:12.052 "unmap": true, 00:16:12.052 "flush": true, 00:16:12.052 "reset": true, 00:16:12.052 "nvme_admin": false, 00:16:12.052 "nvme_io": false, 00:16:12.052 "nvme_io_md": false, 00:16:12.052 "write_zeroes": true, 00:16:12.052 "zcopy": true, 00:16:12.052 "get_zone_info": false, 00:16:12.052 "zone_management": false, 00:16:12.052 "zone_append": false, 00:16:12.052 "compare": false, 00:16:12.052 "compare_and_write": false, 00:16:12.052 "abort": true, 00:16:12.052 "seek_hole": false, 00:16:12.052 "seek_data": false, 00:16:12.052 "copy": true, 00:16:12.052 "nvme_iov_md": false 00:16:12.052 }, 00:16:12.052 "memory_domains": [ 00:16:12.053 { 00:16:12.053 "dma_device_id": "system", 00:16:12.053 "dma_device_type": 1 00:16:12.053 }, 00:16:12.053 { 00:16:12.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.053 "dma_device_type": 2 00:16:12.053 } 00:16:12.053 ], 00:16:12.053 "driver_specific": {} 00:16:12.053 }' 00:16:12.053 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.053 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.053 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:12.053 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.053 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.053 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:12.053 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.053 06:47:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.312 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.312 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.312 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.312 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.312 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:12.312 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:12.312 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:12.570 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:12.570 "name": "BaseBdev4", 00:16:12.570 "aliases": [ 00:16:12.570 "85603806-d76c-481a-9cbe-6da95b124a9b" 00:16:12.570 ], 00:16:12.570 "product_name": "Malloc disk", 00:16:12.570 "block_size": 512, 00:16:12.570 "num_blocks": 65536, 00:16:12.570 "uuid": "85603806-d76c-481a-9cbe-6da95b124a9b", 00:16:12.570 "assigned_rate_limits": { 00:16:12.570 "rw_ios_per_sec": 0, 00:16:12.570 "rw_mbytes_per_sec": 0, 00:16:12.570 "r_mbytes_per_sec": 0, 00:16:12.570 "w_mbytes_per_sec": 0 00:16:12.570 }, 00:16:12.570 "claimed": true, 00:16:12.570 "claim_type": "exclusive_write", 00:16:12.570 "zoned": false, 00:16:12.571 "supported_io_types": { 00:16:12.571 "read": true, 00:16:12.571 "write": true, 00:16:12.571 "unmap": true, 00:16:12.571 "flush": true, 00:16:12.571 "reset": true, 00:16:12.571 "nvme_admin": false, 00:16:12.571 "nvme_io": false, 00:16:12.571 "nvme_io_md": false, 00:16:12.571 "write_zeroes": true, 00:16:12.571 "zcopy": true, 00:16:12.571 "get_zone_info": false, 00:16:12.571 "zone_management": false, 00:16:12.571 "zone_append": false, 00:16:12.571 "compare": false, 00:16:12.571 "compare_and_write": false, 00:16:12.571 "abort": true, 00:16:12.571 "seek_hole": false, 00:16:12.571 "seek_data": false, 00:16:12.571 "copy": true, 00:16:12.571 "nvme_iov_md": false 00:16:12.571 }, 00:16:12.571 "memory_domains": [ 00:16:12.571 { 00:16:12.571 "dma_device_id": "system", 00:16:12.571 "dma_device_type": 1 00:16:12.571 }, 00:16:12.571 { 00:16:12.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.571 "dma_device_type": 2 00:16:12.571 } 00:16:12.571 ], 00:16:12.571 "driver_specific": {} 00:16:12.571 }' 00:16:12.571 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.571 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.571 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:12.571 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.571 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.829 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:12.829 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.829 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.829 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.829 06:47:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.829 06:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.829 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.829 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:13.088 [2024-08-14 06:47:40.191468] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.088 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.347 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:13.348 "name": "Existed_Raid", 00:16:13.348 "uuid": "b8421b9a-1f80-44e5-a3bb-5409b02c2a15", 00:16:13.348 "strip_size_kb": 0, 00:16:13.348 "state": "online", 00:16:13.348 "raid_level": "raid1", 00:16:13.348 "superblock": false, 00:16:13.348 "num_base_bdevs": 4, 00:16:13.348 "num_base_bdevs_discovered": 3, 00:16:13.348 "num_base_bdevs_operational": 3, 00:16:13.348 "base_bdevs_list": [ 00:16:13.348 { 00:16:13.348 "name": null, 00:16:13.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.348 "is_configured": false, 00:16:13.348 "data_offset": 0, 00:16:13.348 "data_size": 65536 00:16:13.348 }, 00:16:13.348 { 00:16:13.348 "name": "BaseBdev2", 00:16:13.348 "uuid": "b5a24fa1-9cac-46d8-b5fc-e898e705fa24", 00:16:13.348 "is_configured": true, 00:16:13.348 "data_offset": 0, 00:16:13.348 "data_size": 65536 00:16:13.348 }, 00:16:13.348 { 00:16:13.348 "name": "BaseBdev3", 
00:16:13.348 "uuid": "54cf9cde-8cf8-44b6-a98e-68a1cb105668", 00:16:13.348 "is_configured": true, 00:16:13.348 "data_offset": 0, 00:16:13.348 "data_size": 65536 00:16:13.348 }, 00:16:13.348 { 00:16:13.348 "name": "BaseBdev4", 00:16:13.348 "uuid": "85603806-d76c-481a-9cbe-6da95b124a9b", 00:16:13.348 "is_configured": true, 00:16:13.348 "data_offset": 0, 00:16:13.348 "data_size": 65536 00:16:13.348 } 00:16:13.348 ] 00:16:13.348 }' 00:16:13.348 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:13.348 06:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.915 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:13.915 06:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:13.915 06:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.915 06:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:14.173 06:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:14.173 06:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:14.173 06:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:14.173 [2024-08-14 06:47:41.409073] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:14.432 06:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:14.432 06:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:14.432 06:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.432 06:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:14.432 06:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:14.432 06:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:14.432 06:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:14.690 [2024-08-14 06:47:41.859836] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:14.690 06:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:14.690 06:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:14.690 06:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:14.690 06:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.948 06:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:14.948 06:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:14.948 06:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:15.206 [2024-08-14 06:47:42.302858] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:15.206 [2024-08-14 06:47:42.302968] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:15.206 [2024-08-14 06:47:42.314713] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.206 [2024-08-14 06:47:42.314770] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.206 [2024-08-14 06:47:42.314782] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:16:15.206 06:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:15.206 06:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:15.206 06:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:15.206 06:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.464 06:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:15.464 06:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:15.464 06:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:15.464 06:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:15.464 06:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:15.465 06:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:15.724 BaseBdev2 00:16:15.724 06:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:15.724 06:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:15.724 06:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:15.724 06:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:15.724 06:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:15.724 06:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:15.724 06:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:15.983 06:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:15.983 [ 00:16:15.983 { 00:16:15.983 "name": "BaseBdev2", 00:16:15.983 "aliases": [ 00:16:15.983 "7eb7b1b1-8b57-488d-bc1f-09554c323b10" 00:16:15.983 ], 00:16:15.983 "product_name": "Malloc disk", 00:16:15.983 "block_size": 512, 00:16:15.983 "num_blocks": 65536, 00:16:15.983 "uuid": "7eb7b1b1-8b57-488d-bc1f-09554c323b10", 00:16:15.983 "assigned_rate_limits": { 00:16:15.983 "rw_ios_per_sec": 0, 00:16:15.983 "rw_mbytes_per_sec": 0, 00:16:15.983 "r_mbytes_per_sec": 0, 
00:16:15.983 "w_mbytes_per_sec": 0 00:16:15.983 }, 00:16:15.983 "claimed": false, 00:16:15.983 "zoned": false, 00:16:15.983 "supported_io_types": { 00:16:15.983 "read": true, 00:16:15.983 "write": true, 00:16:15.983 "unmap": true, 00:16:15.983 "flush": true, 00:16:15.983 "reset": true, 00:16:15.983 "nvme_admin": false, 00:16:15.983 "nvme_io": false, 00:16:15.983 "nvme_io_md": false, 00:16:15.983 "write_zeroes": true, 00:16:15.983 "zcopy": true, 00:16:15.983 "get_zone_info": false, 00:16:15.983 "zone_management": false, 00:16:15.983 "zone_append": false, 00:16:15.983 "compare": false, 00:16:15.983 "compare_and_write": false, 00:16:15.983 "abort": true, 00:16:15.983 "seek_hole": false, 00:16:15.983 "seek_data": false, 00:16:15.983 "copy": true, 00:16:15.983 "nvme_iov_md": false 00:16:15.983 }, 00:16:15.983 "memory_domains": [ 00:16:15.983 { 00:16:15.983 "dma_device_id": "system", 00:16:15.983 "dma_device_type": 1 00:16:15.983 }, 00:16:15.983 { 00:16:15.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.983 "dma_device_type": 2 00:16:15.983 } 00:16:15.983 ], 00:16:15.983 "driver_specific": {} 00:16:15.983 } 00:16:15.983 ] 00:16:15.983 06:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:15.983 06:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:15.983 06:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:15.983 06:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:16.243 BaseBdev3 00:16:16.243 06:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:16.243 06:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:16.243 06:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:16.243 06:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:16.243 06:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:16.243 06:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:16.243 06:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:16.502 06:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:16.761 [ 00:16:16.761 { 00:16:16.761 "name": "BaseBdev3", 00:16:16.761 "aliases": [ 00:16:16.761 "c8762f0c-361b-4bc5-9708-5e0ccdad9b7a" 00:16:16.761 ], 00:16:16.761 "product_name": "Malloc disk", 00:16:16.761 "block_size": 512, 00:16:16.761 "num_blocks": 65536, 00:16:16.761 "uuid": "c8762f0c-361b-4bc5-9708-5e0ccdad9b7a", 00:16:16.761 "assigned_rate_limits": { 00:16:16.761 "rw_ios_per_sec": 0, 00:16:16.761 "rw_mbytes_per_sec": 0, 00:16:16.761 "r_mbytes_per_sec": 0, 00:16:16.761 "w_mbytes_per_sec": 0 00:16:16.761 }, 00:16:16.761 "claimed": false, 00:16:16.761 "zoned": false, 00:16:16.761 "supported_io_types": { 00:16:16.761 "read": true, 00:16:16.761 "write": true, 00:16:16.761 "unmap": true, 00:16:16.761 "flush": true, 00:16:16.761 "reset": true, 00:16:16.761 "nvme_admin": false, 00:16:16.761 "nvme_io": 
false, 00:16:16.761 "nvme_io_md": false, 00:16:16.761 "write_zeroes": true, 00:16:16.761 "zcopy": true, 00:16:16.761 "get_zone_info": false, 00:16:16.761 "zone_management": false, 00:16:16.761 "zone_append": false, 00:16:16.761 "compare": false, 00:16:16.761 "compare_and_write": false, 00:16:16.761 "abort": true, 00:16:16.761 "seek_hole": false, 00:16:16.761 "seek_data": false, 00:16:16.761 "copy": true, 00:16:16.761 "nvme_iov_md": false 00:16:16.761 }, 00:16:16.761 "memory_domains": [ 00:16:16.761 { 00:16:16.761 "dma_device_id": "system", 00:16:16.761 "dma_device_type": 1 00:16:16.761 }, 00:16:16.761 { 00:16:16.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.761 "dma_device_type": 2 00:16:16.761 } 00:16:16.761 ], 00:16:16.761 "driver_specific": {} 00:16:16.761 } 00:16:16.761 ] 00:16:16.761 06:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:16.761 06:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:16.762 06:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:16.762 06:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:16.762 BaseBdev4 00:16:17.021 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:17.021 06:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:16:17.021 06:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:17.021 06:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:17.021 06:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:17.021 06:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:17.021 06:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:17.021 06:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:17.281 [ 00:16:17.281 { 00:16:17.281 "name": "BaseBdev4", 00:16:17.281 "aliases": [ 00:16:17.281 "9bb47f79-6127-4c9e-a449-7647e9501554" 00:16:17.281 ], 00:16:17.281 "product_name": "Malloc disk", 00:16:17.281 "block_size": 512, 00:16:17.281 "num_blocks": 65536, 00:16:17.281 "uuid": "9bb47f79-6127-4c9e-a449-7647e9501554", 00:16:17.281 "assigned_rate_limits": { 00:16:17.281 "rw_ios_per_sec": 0, 00:16:17.281 "rw_mbytes_per_sec": 0, 00:16:17.281 "r_mbytes_per_sec": 0, 00:16:17.281 "w_mbytes_per_sec": 0 00:16:17.281 }, 00:16:17.281 "claimed": false, 00:16:17.281 "zoned": false, 00:16:17.281 "supported_io_types": { 00:16:17.281 "read": true, 00:16:17.281 "write": true, 00:16:17.281 "unmap": true, 00:16:17.281 "flush": true, 00:16:17.281 "reset": true, 00:16:17.281 "nvme_admin": false, 00:16:17.281 "nvme_io": false, 00:16:17.281 "nvme_io_md": false, 00:16:17.281 "write_zeroes": true, 00:16:17.281 "zcopy": true, 00:16:17.281 "get_zone_info": false, 00:16:17.281 "zone_management": false, 00:16:17.281 "zone_append": false, 00:16:17.281 "compare": false, 00:16:17.281 "compare_and_write": false, 00:16:17.281 "abort": true, 00:16:17.281 "seek_hole": false, 
00:16:17.281 "seek_data": false, 00:16:17.281 "copy": true, 00:16:17.281 "nvme_iov_md": false 00:16:17.281 }, 00:16:17.281 "memory_domains": [ 00:16:17.281 { 00:16:17.281 "dma_device_id": "system", 00:16:17.281 "dma_device_type": 1 00:16:17.281 }, 00:16:17.281 { 00:16:17.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.281 "dma_device_type": 2 00:16:17.281 } 00:16:17.281 ], 00:16:17.281 "driver_specific": {} 00:16:17.281 } 00:16:17.281 ] 00:16:17.281 06:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:17.281 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:17.281 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:17.281 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:17.541 [2024-08-14 06:47:44.581445] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:17.541 [2024-08-14 06:47:44.581560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:17.541 [2024-08-14 06:47:44.581596] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:17.541 [2024-08-14 06:47:44.583621] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:17.541 [2024-08-14 06:47:44.583706] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:17.541 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:17.541 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:17.541 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:17.541 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:17.541 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:17.541 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:17.541 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:17.541 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:17.541 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:17.541 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:17.541 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.541 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.801 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:17.801 "name": "Existed_Raid", 00:16:17.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.801 "strip_size_kb": 0, 00:16:17.801 "state": "configuring", 00:16:17.801 "raid_level": "raid1", 00:16:17.801 "superblock": false, 00:16:17.801 "num_base_bdevs": 4, 00:16:17.801 "num_base_bdevs_discovered": 3, 00:16:17.801 
"num_base_bdevs_operational": 4, 00:16:17.801 "base_bdevs_list": [ 00:16:17.801 { 00:16:17.801 "name": "BaseBdev1", 00:16:17.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.801 "is_configured": false, 00:16:17.801 "data_offset": 0, 00:16:17.801 "data_size": 0 00:16:17.801 }, 00:16:17.801 { 00:16:17.801 "name": "BaseBdev2", 00:16:17.801 "uuid": "7eb7b1b1-8b57-488d-bc1f-09554c323b10", 00:16:17.801 "is_configured": true, 00:16:17.801 "data_offset": 0, 00:16:17.801 "data_size": 65536 00:16:17.801 }, 00:16:17.801 { 00:16:17.801 "name": "BaseBdev3", 00:16:17.801 "uuid": "c8762f0c-361b-4bc5-9708-5e0ccdad9b7a", 00:16:17.801 "is_configured": true, 00:16:17.801 "data_offset": 0, 00:16:17.801 "data_size": 65536 00:16:17.801 }, 00:16:17.801 { 00:16:17.801 "name": "BaseBdev4", 00:16:17.801 "uuid": "9bb47f79-6127-4c9e-a449-7647e9501554", 00:16:17.801 "is_configured": true, 00:16:17.801 "data_offset": 0, 00:16:17.801 "data_size": 65536 00:16:17.801 } 00:16:17.801 ] 00:16:17.801 }' 00:16:17.801 06:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:17.801 06:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.371 06:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:18.371 [2024-08-14 06:47:45.519857] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:18.371 06:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:18.371 06:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:18.371 06:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:18.371 06:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:18.371 06:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:18.371 06:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:18.371 06:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.371 06:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.371 06:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.371 06:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.371 06:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.371 06:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.637 06:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.637 "name": "Existed_Raid", 00:16:18.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.637 "strip_size_kb": 0, 00:16:18.637 "state": "configuring", 00:16:18.637 "raid_level": "raid1", 00:16:18.637 "superblock": false, 00:16:18.637 "num_base_bdevs": 4, 00:16:18.637 "num_base_bdevs_discovered": 2, 00:16:18.637 "num_base_bdevs_operational": 4, 00:16:18.637 "base_bdevs_list": [ 00:16:18.637 { 00:16:18.637 "name": "BaseBdev1", 00:16:18.637 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:18.637 "is_configured": false, 00:16:18.637 "data_offset": 0, 00:16:18.637 "data_size": 0 00:16:18.637 }, 00:16:18.637 { 00:16:18.637 "name": null, 00:16:18.637 "uuid": "7eb7b1b1-8b57-488d-bc1f-09554c323b10", 00:16:18.637 "is_configured": false, 00:16:18.637 "data_offset": 0, 00:16:18.637 "data_size": 65536 00:16:18.637 }, 00:16:18.637 { 00:16:18.637 "name": "BaseBdev3", 00:16:18.637 "uuid": "c8762f0c-361b-4bc5-9708-5e0ccdad9b7a", 00:16:18.637 "is_configured": true, 00:16:18.637 "data_offset": 0, 00:16:18.637 "data_size": 65536 00:16:18.637 }, 00:16:18.637 { 00:16:18.637 "name": "BaseBdev4", 00:16:18.637 "uuid": "9bb47f79-6127-4c9e-a449-7647e9501554", 00:16:18.637 "is_configured": true, 00:16:18.637 "data_offset": 0, 00:16:18.637 "data_size": 65536 00:16:18.637 } 00:16:18.637 ] 00:16:18.637 }' 00:16:18.637 06:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.637 06:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.216 06:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.216 06:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:19.474 06:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:19.475 06:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:19.475 [2024-08-14 06:47:46.712804] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.475 BaseBdev1 00:16:19.733 06:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:19.733 06:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:19.733 06:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:19.733 06:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:19.733 06:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:19.733 06:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:19.733 06:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:19.733 06:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:19.991 [ 00:16:19.991 { 00:16:19.991 "name": "BaseBdev1", 00:16:19.991 "aliases": [ 00:16:19.991 "6136c651-a066-4534-9d2d-cbc06b29e9e1" 00:16:19.991 ], 00:16:19.991 "product_name": "Malloc disk", 00:16:19.991 "block_size": 512, 00:16:19.991 "num_blocks": 65536, 00:16:19.991 "uuid": "6136c651-a066-4534-9d2d-cbc06b29e9e1", 00:16:19.991 "assigned_rate_limits": { 00:16:19.991 "rw_ios_per_sec": 0, 00:16:19.991 "rw_mbytes_per_sec": 0, 00:16:19.991 "r_mbytes_per_sec": 0, 00:16:19.991 "w_mbytes_per_sec": 0 00:16:19.991 }, 00:16:19.991 "claimed": true, 00:16:19.991 "claim_type": "exclusive_write", 00:16:19.991 "zoned": false, 00:16:19.991 "supported_io_types": { 
00:16:19.991 "read": true, 00:16:19.991 "write": true, 00:16:19.991 "unmap": true, 00:16:19.991 "flush": true, 00:16:19.991 "reset": true, 00:16:19.991 "nvme_admin": false, 00:16:19.991 "nvme_io": false, 00:16:19.991 "nvme_io_md": false, 00:16:19.991 "write_zeroes": true, 00:16:19.991 "zcopy": true, 00:16:19.991 "get_zone_info": false, 00:16:19.991 "zone_management": false, 00:16:19.991 "zone_append": false, 00:16:19.991 "compare": false, 00:16:19.991 "compare_and_write": false, 00:16:19.991 "abort": true, 00:16:19.991 "seek_hole": false, 00:16:19.991 "seek_data": false, 00:16:19.991 "copy": true, 00:16:19.991 "nvme_iov_md": false 00:16:19.991 }, 00:16:19.991 "memory_domains": [ 00:16:19.991 { 00:16:19.991 "dma_device_id": "system", 00:16:19.991 "dma_device_type": 1 00:16:19.991 }, 00:16:19.991 { 00:16:19.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.991 "dma_device_type": 2 00:16:19.991 } 00:16:19.991 ], 00:16:19.991 "driver_specific": {} 00:16:19.991 } 00:16:19.991 ] 00:16:19.991 06:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:19.991 06:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:19.991 06:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:19.991 06:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:19.991 06:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:19.991 06:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:19.992 06:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:19.992 06:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:19.992 06:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:19.992 06:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:19.992 06:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:19.992 06:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.992 06:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.249 06:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.249 "name": "Existed_Raid", 00:16:20.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.249 "strip_size_kb": 0, 00:16:20.249 "state": "configuring", 00:16:20.249 "raid_level": "raid1", 00:16:20.249 "superblock": false, 00:16:20.249 "num_base_bdevs": 4, 00:16:20.249 "num_base_bdevs_discovered": 3, 00:16:20.249 "num_base_bdevs_operational": 4, 00:16:20.249 "base_bdevs_list": [ 00:16:20.249 { 00:16:20.249 "name": "BaseBdev1", 00:16:20.249 "uuid": "6136c651-a066-4534-9d2d-cbc06b29e9e1", 00:16:20.249 "is_configured": true, 00:16:20.249 "data_offset": 0, 00:16:20.249 "data_size": 65536 00:16:20.249 }, 00:16:20.249 { 00:16:20.249 "name": null, 00:16:20.249 "uuid": "7eb7b1b1-8b57-488d-bc1f-09554c323b10", 00:16:20.249 "is_configured": false, 00:16:20.249 "data_offset": 0, 00:16:20.249 "data_size": 65536 00:16:20.249 }, 00:16:20.249 { 00:16:20.249 "name": 
"BaseBdev3", 00:16:20.249 "uuid": "c8762f0c-361b-4bc5-9708-5e0ccdad9b7a", 00:16:20.249 "is_configured": true, 00:16:20.249 "data_offset": 0, 00:16:20.249 "data_size": 65536 00:16:20.249 }, 00:16:20.249 { 00:16:20.249 "name": "BaseBdev4", 00:16:20.249 "uuid": "9bb47f79-6127-4c9e-a449-7647e9501554", 00:16:20.249 "is_configured": true, 00:16:20.249 "data_offset": 0, 00:16:20.249 "data_size": 65536 00:16:20.249 } 00:16:20.249 ] 00:16:20.249 }' 00:16:20.249 06:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.249 06:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.815 06:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:20.815 06:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.815 06:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:20.815 06:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:21.073 [2024-08-14 06:47:48.226345] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:21.073 06:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:21.073 06:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:21.073 06:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:21.073 06:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:21.074 06:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:21.074 06:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:21.074 06:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:21.074 06:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.074 06:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.074 06:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.074 06:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.074 06:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.331 06:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.331 "name": "Existed_Raid", 00:16:21.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.332 "strip_size_kb": 0, 00:16:21.332 "state": "configuring", 00:16:21.332 "raid_level": "raid1", 00:16:21.332 "superblock": false, 00:16:21.332 "num_base_bdevs": 4, 00:16:21.332 "num_base_bdevs_discovered": 2, 00:16:21.332 "num_base_bdevs_operational": 4, 00:16:21.332 "base_bdevs_list": [ 00:16:21.332 { 00:16:21.332 "name": "BaseBdev1", 00:16:21.332 "uuid": "6136c651-a066-4534-9d2d-cbc06b29e9e1", 00:16:21.332 "is_configured": true, 00:16:21.332 "data_offset": 0, 00:16:21.332 "data_size": 65536 
00:16:21.332 }, 00:16:21.332 { 00:16:21.332 "name": null, 00:16:21.332 "uuid": "7eb7b1b1-8b57-488d-bc1f-09554c323b10", 00:16:21.332 "is_configured": false, 00:16:21.332 "data_offset": 0, 00:16:21.332 "data_size": 65536 00:16:21.332 }, 00:16:21.332 { 00:16:21.332 "name": null, 00:16:21.332 "uuid": "c8762f0c-361b-4bc5-9708-5e0ccdad9b7a", 00:16:21.332 "is_configured": false, 00:16:21.332 "data_offset": 0, 00:16:21.332 "data_size": 65536 00:16:21.332 }, 00:16:21.332 { 00:16:21.332 "name": "BaseBdev4", 00:16:21.332 "uuid": "9bb47f79-6127-4c9e-a449-7647e9501554", 00:16:21.332 "is_configured": true, 00:16:21.332 "data_offset": 0, 00:16:21.332 "data_size": 65536 00:16:21.332 } 00:16:21.332 ] 00:16:21.332 }' 00:16:21.332 06:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.332 06:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.897 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.897 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:22.155 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:22.155 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:22.413 [2024-08-14 06:47:49.444370] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:22.414 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:22.414 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:22.414 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:22.414 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:22.414 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:22.414 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:22.414 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.414 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.414 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.414 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.414 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.414 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.672 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.672 "name": "Existed_Raid", 00:16:22.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.672 "strip_size_kb": 0, 00:16:22.672 "state": "configuring", 00:16:22.672 "raid_level": "raid1", 00:16:22.672 "superblock": false, 00:16:22.672 "num_base_bdevs": 4, 00:16:22.672 
"num_base_bdevs_discovered": 3, 00:16:22.672 "num_base_bdevs_operational": 4, 00:16:22.672 "base_bdevs_list": [ 00:16:22.672 { 00:16:22.672 "name": "BaseBdev1", 00:16:22.672 "uuid": "6136c651-a066-4534-9d2d-cbc06b29e9e1", 00:16:22.672 "is_configured": true, 00:16:22.672 "data_offset": 0, 00:16:22.672 "data_size": 65536 00:16:22.672 }, 00:16:22.672 { 00:16:22.672 "name": null, 00:16:22.672 "uuid": "7eb7b1b1-8b57-488d-bc1f-09554c323b10", 00:16:22.672 "is_configured": false, 00:16:22.672 "data_offset": 0, 00:16:22.672 "data_size": 65536 00:16:22.672 }, 00:16:22.672 { 00:16:22.672 "name": "BaseBdev3", 00:16:22.672 "uuid": "c8762f0c-361b-4bc5-9708-5e0ccdad9b7a", 00:16:22.672 "is_configured": true, 00:16:22.672 "data_offset": 0, 00:16:22.672 "data_size": 65536 00:16:22.672 }, 00:16:22.672 { 00:16:22.672 "name": "BaseBdev4", 00:16:22.672 "uuid": "9bb47f79-6127-4c9e-a449-7647e9501554", 00:16:22.672 "is_configured": true, 00:16:22.672 "data_offset": 0, 00:16:22.672 "data_size": 65536 00:16:22.672 } 00:16:22.672 ] 00:16:22.672 }' 00:16:22.672 06:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.672 06:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.239 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.239 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:23.239 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:23.239 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:23.497 [2024-08-14 06:47:50.662300] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:23.497 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:23.497 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:23.497 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:23.497 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:23.497 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:23.497 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:23.497 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.497 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.497 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.497 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:23.497 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.497 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.754 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.755 "name": 
"Existed_Raid", 00:16:23.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.755 "strip_size_kb": 0, 00:16:23.755 "state": "configuring", 00:16:23.755 "raid_level": "raid1", 00:16:23.755 "superblock": false, 00:16:23.755 "num_base_bdevs": 4, 00:16:23.755 "num_base_bdevs_discovered": 2, 00:16:23.755 "num_base_bdevs_operational": 4, 00:16:23.755 "base_bdevs_list": [ 00:16:23.755 { 00:16:23.755 "name": null, 00:16:23.755 "uuid": "6136c651-a066-4534-9d2d-cbc06b29e9e1", 00:16:23.755 "is_configured": false, 00:16:23.755 "data_offset": 0, 00:16:23.755 "data_size": 65536 00:16:23.755 }, 00:16:23.755 { 00:16:23.755 "name": null, 00:16:23.755 "uuid": "7eb7b1b1-8b57-488d-bc1f-09554c323b10", 00:16:23.755 "is_configured": false, 00:16:23.755 "data_offset": 0, 00:16:23.755 "data_size": 65536 00:16:23.755 }, 00:16:23.755 { 00:16:23.755 "name": "BaseBdev3", 00:16:23.755 "uuid": "c8762f0c-361b-4bc5-9708-5e0ccdad9b7a", 00:16:23.755 "is_configured": true, 00:16:23.755 "data_offset": 0, 00:16:23.755 "data_size": 65536 00:16:23.755 }, 00:16:23.755 { 00:16:23.755 "name": "BaseBdev4", 00:16:23.755 "uuid": "9bb47f79-6127-4c9e-a449-7647e9501554", 00:16:23.755 "is_configured": true, 00:16:23.755 "data_offset": 0, 00:16:23.755 "data_size": 65536 00:16:23.755 } 00:16:23.755 ] 00:16:23.755 }' 00:16:23.755 06:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:23.755 06:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.323 06:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.323 06:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:24.582 06:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:24.582 06:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:24.841 [2024-08-14 06:47:51.954777] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.841 06:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:24.841 06:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:24.841 06:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:24.841 06:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:24.841 06:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:24.841 06:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:24.841 06:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.841 06:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.841 06:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.841 06:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.841 06:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.841 06:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.101 06:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.101 "name": "Existed_Raid", 00:16:25.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.101 "strip_size_kb": 0, 00:16:25.101 "state": "configuring", 00:16:25.101 "raid_level": "raid1", 00:16:25.101 "superblock": false, 00:16:25.101 "num_base_bdevs": 4, 00:16:25.101 "num_base_bdevs_discovered": 3, 00:16:25.101 "num_base_bdevs_operational": 4, 00:16:25.101 "base_bdevs_list": [ 00:16:25.101 { 00:16:25.101 "name": null, 00:16:25.101 "uuid": "6136c651-a066-4534-9d2d-cbc06b29e9e1", 00:16:25.101 "is_configured": false, 00:16:25.101 "data_offset": 0, 00:16:25.101 "data_size": 65536 00:16:25.101 }, 00:16:25.101 { 00:16:25.101 "name": "BaseBdev2", 00:16:25.101 "uuid": "7eb7b1b1-8b57-488d-bc1f-09554c323b10", 00:16:25.101 "is_configured": true, 00:16:25.101 "data_offset": 0, 00:16:25.101 "data_size": 65536 00:16:25.101 }, 00:16:25.101 { 00:16:25.101 "name": "BaseBdev3", 00:16:25.101 "uuid": "c8762f0c-361b-4bc5-9708-5e0ccdad9b7a", 00:16:25.101 "is_configured": true, 00:16:25.101 "data_offset": 0, 00:16:25.101 "data_size": 65536 00:16:25.101 }, 00:16:25.101 { 00:16:25.101 "name": "BaseBdev4", 00:16:25.101 "uuid": "9bb47f79-6127-4c9e-a449-7647e9501554", 00:16:25.101 "is_configured": true, 00:16:25.101 "data_offset": 0, 00:16:25.101 "data_size": 65536 00:16:25.101 } 00:16:25.101 ] 00:16:25.101 }' 00:16:25.101 06:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.101 06:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.671 06:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.671 06:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:25.931 06:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:25.931 06:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.931 06:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:25.931 06:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6136c651-a066-4534-9d2d-cbc06b29e9e1 00:16:26.189 [2024-08-14 06:47:53.359498] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:26.189 [2024-08-14 06:47:53.359559] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:16:26.189 [2024-08-14 06:47:53.359568] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:26.189 [2024-08-14 06:47:53.359825] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:16:26.189 [2024-08-14 06:47:53.359959] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:16:26.189 [2024-08-14 06:47:53.359979] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001c80 00:16:26.189 [2024-08-14 06:47:53.360157] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.189 NewBaseBdev 00:16:26.189 06:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:26.189 06:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:16:26.189 06:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:26.189 06:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:26.189 06:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:26.189 06:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:26.189 06:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:26.448 06:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:26.711 [ 00:16:26.711 { 00:16:26.711 "name": "NewBaseBdev", 00:16:26.711 "aliases": [ 00:16:26.711 "6136c651-a066-4534-9d2d-cbc06b29e9e1" 00:16:26.711 ], 00:16:26.711 "product_name": "Malloc disk", 00:16:26.712 "block_size": 512, 00:16:26.712 "num_blocks": 65536, 00:16:26.712 "uuid": "6136c651-a066-4534-9d2d-cbc06b29e9e1", 00:16:26.712 "assigned_rate_limits": { 00:16:26.712 "rw_ios_per_sec": 0, 00:16:26.712 "rw_mbytes_per_sec": 0, 00:16:26.712 "r_mbytes_per_sec": 0, 00:16:26.712 "w_mbytes_per_sec": 0 00:16:26.712 }, 00:16:26.712 "claimed": true, 00:16:26.712 "claim_type": "exclusive_write", 00:16:26.712 "zoned": false, 00:16:26.712 "supported_io_types": { 00:16:26.712 "read": true, 00:16:26.712 "write": true, 00:16:26.712 "unmap": true, 00:16:26.712 "flush": true, 00:16:26.712 "reset": true, 00:16:26.712 "nvme_admin": false, 00:16:26.712 "nvme_io": false, 00:16:26.712 "nvme_io_md": false, 00:16:26.712 "write_zeroes": true, 00:16:26.712 "zcopy": true, 00:16:26.712 "get_zone_info": false, 00:16:26.712 "zone_management": false, 00:16:26.712 "zone_append": false, 00:16:26.712 "compare": false, 00:16:26.712 "compare_and_write": false, 00:16:26.712 "abort": true, 00:16:26.712 "seek_hole": false, 00:16:26.712 "seek_data": false, 00:16:26.712 "copy": true, 00:16:26.712 "nvme_iov_md": false 00:16:26.712 }, 00:16:26.712 "memory_domains": [ 00:16:26.712 { 00:16:26.712 "dma_device_id": "system", 00:16:26.712 "dma_device_type": 1 00:16:26.712 }, 00:16:26.712 { 00:16:26.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.712 "dma_device_type": 2 00:16:26.712 } 00:16:26.712 ], 00:16:26.712 "driver_specific": {} 00:16:26.712 } 00:16:26.712 ] 00:16:26.712 06:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:26.712 06:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:26.712 06:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:26.712 06:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:26.712 06:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:26.712 06:47:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:26.712 06:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:26.712 06:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:26.712 06:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:26.712 06:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:26.712 06:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:26.712 06:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.712 06:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.969 06:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.969 "name": "Existed_Raid", 00:16:26.969 "uuid": "1b240520-7fad-4807-b832-e8d6236216d1", 00:16:26.969 "strip_size_kb": 0, 00:16:26.969 "state": "online", 00:16:26.969 "raid_level": "raid1", 00:16:26.969 "superblock": false, 00:16:26.969 "num_base_bdevs": 4, 00:16:26.969 "num_base_bdevs_discovered": 4, 00:16:26.969 "num_base_bdevs_operational": 4, 00:16:26.969 "base_bdevs_list": [ 00:16:26.969 { 00:16:26.969 "name": "NewBaseBdev", 00:16:26.969 "uuid": "6136c651-a066-4534-9d2d-cbc06b29e9e1", 00:16:26.969 "is_configured": true, 00:16:26.969 "data_offset": 0, 00:16:26.969 "data_size": 65536 00:16:26.969 }, 00:16:26.969 { 00:16:26.969 "name": "BaseBdev2", 00:16:26.969 "uuid": "7eb7b1b1-8b57-488d-bc1f-09554c323b10", 00:16:26.969 "is_configured": true, 00:16:26.969 "data_offset": 0, 00:16:26.969 "data_size": 65536 00:16:26.969 }, 00:16:26.969 { 00:16:26.969 "name": "BaseBdev3", 00:16:26.969 "uuid": "c8762f0c-361b-4bc5-9708-5e0ccdad9b7a", 00:16:26.969 "is_configured": true, 00:16:26.969 "data_offset": 0, 00:16:26.969 "data_size": 65536 00:16:26.969 }, 00:16:26.969 { 00:16:26.969 "name": "BaseBdev4", 00:16:26.969 "uuid": "9bb47f79-6127-4c9e-a449-7647e9501554", 00:16:26.969 "is_configured": true, 00:16:26.969 "data_offset": 0, 00:16:26.969 "data_size": 65536 00:16:26.969 } 00:16:26.969 ] 00:16:26.969 }' 00:16:26.969 06:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.969 06:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.537 06:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:27.537 06:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:27.537 06:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:27.537 06:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:27.537 06:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:27.537 06:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:27.537 06:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:27.537 06:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:27.797 [2024-08-14 06:47:54.829572] 
bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.797 06:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:27.797 "name": "Existed_Raid", 00:16:27.797 "aliases": [ 00:16:27.797 "1b240520-7fad-4807-b832-e8d6236216d1" 00:16:27.797 ], 00:16:27.797 "product_name": "Raid Volume", 00:16:27.797 "block_size": 512, 00:16:27.797 "num_blocks": 65536, 00:16:27.797 "uuid": "1b240520-7fad-4807-b832-e8d6236216d1", 00:16:27.797 "assigned_rate_limits": { 00:16:27.797 "rw_ios_per_sec": 0, 00:16:27.797 "rw_mbytes_per_sec": 0, 00:16:27.797 "r_mbytes_per_sec": 0, 00:16:27.797 "w_mbytes_per_sec": 0 00:16:27.797 }, 00:16:27.797 "claimed": false, 00:16:27.797 "zoned": false, 00:16:27.797 "supported_io_types": { 00:16:27.797 "read": true, 00:16:27.797 "write": true, 00:16:27.797 "unmap": false, 00:16:27.797 "flush": false, 00:16:27.797 "reset": true, 00:16:27.797 "nvme_admin": false, 00:16:27.797 "nvme_io": false, 00:16:27.797 "nvme_io_md": false, 00:16:27.797 "write_zeroes": true, 00:16:27.797 "zcopy": false, 00:16:27.797 "get_zone_info": false, 00:16:27.797 "zone_management": false, 00:16:27.797 "zone_append": false, 00:16:27.797 "compare": false, 00:16:27.797 "compare_and_write": false, 00:16:27.797 "abort": false, 00:16:27.797 "seek_hole": false, 00:16:27.797 "seek_data": false, 00:16:27.797 "copy": false, 00:16:27.797 "nvme_iov_md": false 00:16:27.797 }, 00:16:27.797 "memory_domains": [ 00:16:27.797 { 00:16:27.797 "dma_device_id": "system", 00:16:27.797 "dma_device_type": 1 00:16:27.797 }, 00:16:27.797 { 00:16:27.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.797 "dma_device_type": 2 00:16:27.797 }, 00:16:27.797 { 00:16:27.797 "dma_device_id": "system", 00:16:27.797 "dma_device_type": 1 00:16:27.797 }, 00:16:27.797 { 00:16:27.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.797 "dma_device_type": 2 00:16:27.797 }, 00:16:27.797 { 00:16:27.797 "dma_device_id": "system", 00:16:27.797 "dma_device_type": 1 00:16:27.797 }, 00:16:27.797 { 00:16:27.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.797 "dma_device_type": 2 00:16:27.797 }, 00:16:27.797 { 00:16:27.797 "dma_device_id": "system", 00:16:27.797 "dma_device_type": 1 00:16:27.797 }, 00:16:27.797 { 00:16:27.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.797 "dma_device_type": 2 00:16:27.797 } 00:16:27.797 ], 00:16:27.797 "driver_specific": { 00:16:27.797 "raid": { 00:16:27.797 "uuid": "1b240520-7fad-4807-b832-e8d6236216d1", 00:16:27.797 "strip_size_kb": 0, 00:16:27.797 "state": "online", 00:16:27.797 "raid_level": "raid1", 00:16:27.797 "superblock": false, 00:16:27.797 "num_base_bdevs": 4, 00:16:27.797 "num_base_bdevs_discovered": 4, 00:16:27.797 "num_base_bdevs_operational": 4, 00:16:27.797 "base_bdevs_list": [ 00:16:27.797 { 00:16:27.797 "name": "NewBaseBdev", 00:16:27.797 "uuid": "6136c651-a066-4534-9d2d-cbc06b29e9e1", 00:16:27.797 "is_configured": true, 00:16:27.797 "data_offset": 0, 00:16:27.797 "data_size": 65536 00:16:27.797 }, 00:16:27.797 { 00:16:27.797 "name": "BaseBdev2", 00:16:27.797 "uuid": "7eb7b1b1-8b57-488d-bc1f-09554c323b10", 00:16:27.797 "is_configured": true, 00:16:27.797 "data_offset": 0, 00:16:27.797 "data_size": 65536 00:16:27.797 }, 00:16:27.797 { 00:16:27.797 "name": "BaseBdev3", 00:16:27.797 "uuid": "c8762f0c-361b-4bc5-9708-5e0ccdad9b7a", 00:16:27.797 "is_configured": true, 00:16:27.797 "data_offset": 0, 00:16:27.797 "data_size": 65536 00:16:27.797 }, 00:16:27.797 { 00:16:27.797 "name": "BaseBdev4", 00:16:27.797 "uuid": 
"9bb47f79-6127-4c9e-a449-7647e9501554", 00:16:27.797 "is_configured": true, 00:16:27.797 "data_offset": 0, 00:16:27.797 "data_size": 65536 00:16:27.797 } 00:16:27.797 ] 00:16:27.797 } 00:16:27.797 } 00:16:27.797 }' 00:16:27.797 06:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.797 06:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:27.797 BaseBdev2 00:16:27.797 BaseBdev3 00:16:27.797 BaseBdev4' 00:16:27.797 06:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:27.797 06:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:27.797 06:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:28.056 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:28.056 "name": "NewBaseBdev", 00:16:28.056 "aliases": [ 00:16:28.056 "6136c651-a066-4534-9d2d-cbc06b29e9e1" 00:16:28.056 ], 00:16:28.056 "product_name": "Malloc disk", 00:16:28.056 "block_size": 512, 00:16:28.056 "num_blocks": 65536, 00:16:28.056 "uuid": "6136c651-a066-4534-9d2d-cbc06b29e9e1", 00:16:28.056 "assigned_rate_limits": { 00:16:28.056 "rw_ios_per_sec": 0, 00:16:28.056 "rw_mbytes_per_sec": 0, 00:16:28.056 "r_mbytes_per_sec": 0, 00:16:28.056 "w_mbytes_per_sec": 0 00:16:28.056 }, 00:16:28.056 "claimed": true, 00:16:28.056 "claim_type": "exclusive_write", 00:16:28.056 "zoned": false, 00:16:28.056 "supported_io_types": { 00:16:28.056 "read": true, 00:16:28.056 "write": true, 00:16:28.056 "unmap": true, 00:16:28.056 "flush": true, 00:16:28.056 "reset": true, 00:16:28.056 "nvme_admin": false, 00:16:28.056 "nvme_io": false, 00:16:28.056 "nvme_io_md": false, 00:16:28.056 "write_zeroes": true, 00:16:28.056 "zcopy": true, 00:16:28.056 "get_zone_info": false, 00:16:28.056 "zone_management": false, 00:16:28.056 "zone_append": false, 00:16:28.056 "compare": false, 00:16:28.056 "compare_and_write": false, 00:16:28.056 "abort": true, 00:16:28.056 "seek_hole": false, 00:16:28.056 "seek_data": false, 00:16:28.056 "copy": true, 00:16:28.056 "nvme_iov_md": false 00:16:28.056 }, 00:16:28.056 "memory_domains": [ 00:16:28.056 { 00:16:28.056 "dma_device_id": "system", 00:16:28.056 "dma_device_type": 1 00:16:28.056 }, 00:16:28.056 { 00:16:28.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.056 "dma_device_type": 2 00:16:28.056 } 00:16:28.056 ], 00:16:28.056 "driver_specific": {} 00:16:28.056 }' 00:16:28.056 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.056 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.056 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:28.056 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.056 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.056 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:28.056 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.056 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.315 06:47:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:28.315 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:28.315 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:28.315 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:28.315 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:28.315 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:28.315 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:28.574 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:28.574 "name": "BaseBdev2", 00:16:28.574 "aliases": [ 00:16:28.574 "7eb7b1b1-8b57-488d-bc1f-09554c323b10" 00:16:28.574 ], 00:16:28.574 "product_name": "Malloc disk", 00:16:28.574 "block_size": 512, 00:16:28.574 "num_blocks": 65536, 00:16:28.574 "uuid": "7eb7b1b1-8b57-488d-bc1f-09554c323b10", 00:16:28.574 "assigned_rate_limits": { 00:16:28.574 "rw_ios_per_sec": 0, 00:16:28.574 "rw_mbytes_per_sec": 0, 00:16:28.574 "r_mbytes_per_sec": 0, 00:16:28.574 "w_mbytes_per_sec": 0 00:16:28.574 }, 00:16:28.574 "claimed": true, 00:16:28.574 "claim_type": "exclusive_write", 00:16:28.574 "zoned": false, 00:16:28.574 "supported_io_types": { 00:16:28.574 "read": true, 00:16:28.574 "write": true, 00:16:28.574 "unmap": true, 00:16:28.574 "flush": true, 00:16:28.574 "reset": true, 00:16:28.574 "nvme_admin": false, 00:16:28.574 "nvme_io": false, 00:16:28.574 "nvme_io_md": false, 00:16:28.574 "write_zeroes": true, 00:16:28.574 "zcopy": true, 00:16:28.574 "get_zone_info": false, 00:16:28.574 "zone_management": false, 00:16:28.574 "zone_append": false, 00:16:28.574 "compare": false, 00:16:28.574 "compare_and_write": false, 00:16:28.574 "abort": true, 00:16:28.574 "seek_hole": false, 00:16:28.574 "seek_data": false, 00:16:28.574 "copy": true, 00:16:28.574 "nvme_iov_md": false 00:16:28.574 }, 00:16:28.574 "memory_domains": [ 00:16:28.574 { 00:16:28.574 "dma_device_id": "system", 00:16:28.574 "dma_device_type": 1 00:16:28.574 }, 00:16:28.574 { 00:16:28.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.574 "dma_device_type": 2 00:16:28.574 } 00:16:28.574 ], 00:16:28.574 "driver_specific": {} 00:16:28.574 }' 00:16:28.574 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.574 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.574 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:28.574 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.574 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.574 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:28.574 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.574 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.832 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:28.832 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:28.832 
06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:28.832 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:28.832 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:28.832 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:28.832 06:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:29.091 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:29.091 "name": "BaseBdev3", 00:16:29.091 "aliases": [ 00:16:29.091 "c8762f0c-361b-4bc5-9708-5e0ccdad9b7a" 00:16:29.091 ], 00:16:29.091 "product_name": "Malloc disk", 00:16:29.091 "block_size": 512, 00:16:29.091 "num_blocks": 65536, 00:16:29.091 "uuid": "c8762f0c-361b-4bc5-9708-5e0ccdad9b7a", 00:16:29.091 "assigned_rate_limits": { 00:16:29.091 "rw_ios_per_sec": 0, 00:16:29.091 "rw_mbytes_per_sec": 0, 00:16:29.091 "r_mbytes_per_sec": 0, 00:16:29.091 "w_mbytes_per_sec": 0 00:16:29.091 }, 00:16:29.091 "claimed": true, 00:16:29.091 "claim_type": "exclusive_write", 00:16:29.091 "zoned": false, 00:16:29.091 "supported_io_types": { 00:16:29.091 "read": true, 00:16:29.091 "write": true, 00:16:29.091 "unmap": true, 00:16:29.091 "flush": true, 00:16:29.091 "reset": true, 00:16:29.091 "nvme_admin": false, 00:16:29.091 "nvme_io": false, 00:16:29.091 "nvme_io_md": false, 00:16:29.091 "write_zeroes": true, 00:16:29.091 "zcopy": true, 00:16:29.091 "get_zone_info": false, 00:16:29.091 "zone_management": false, 00:16:29.091 "zone_append": false, 00:16:29.091 "compare": false, 00:16:29.091 "compare_and_write": false, 00:16:29.091 "abort": true, 00:16:29.091 "seek_hole": false, 00:16:29.091 "seek_data": false, 00:16:29.091 "copy": true, 00:16:29.091 "nvme_iov_md": false 00:16:29.091 }, 00:16:29.091 "memory_domains": [ 00:16:29.091 { 00:16:29.091 "dma_device_id": "system", 00:16:29.091 "dma_device_type": 1 00:16:29.091 }, 00:16:29.091 { 00:16:29.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.091 "dma_device_type": 2 00:16:29.091 } 00:16:29.091 ], 00:16:29.091 "driver_specific": {} 00:16:29.091 }' 00:16:29.091 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.091 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.091 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.091 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.091 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.091 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:29.091 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.350 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.350 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:29.350 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.350 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.350 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
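For reference, the RPC primitives exercised by raid_state_function_test above can be driven by hand against a running SPDK target. The following is only a sketch, not part of the test suite: it assumes an SPDK application is already listening on /var/tmp/spdk-raid.sock and reuses the bdev names seen in this trace; the exact order of operations in the real test differs (it deliberately creates the raid while some base bdevs are still missing to exercise the "configuring" state).

# Sketch: manual walk-through of the raid1 state-function RPC flow (assumptions noted above).
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# Create four 32 MiB malloc bdevs with a 512-byte block size to act as base bdevs.
for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    rpc bdev_malloc_create 32 512 -b "$b"
done
rpc bdev_wait_for_examine

# Assemble them into a raid1 bdev (no superblock) and inspect its state.
rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

# Remove one base bdev and re-add it, checking the discovered count each time.
rpc bdev_raid_remove_base_bdev BaseBdev2
rpc bdev_raid_get_bdevs all | jq '.[0].num_base_bdevs_discovered'
rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2
rpc bdev_raid_get_bdevs all | jq '.[0].num_base_bdevs_discovered'

# Tear down.
rpc bdev_raid_delete Existed_Raid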
00:16:29.350 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:29.350 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:29.350 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:29.610 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:29.610 "name": "BaseBdev4", 00:16:29.610 "aliases": [ 00:16:29.610 "9bb47f79-6127-4c9e-a449-7647e9501554" 00:16:29.610 ], 00:16:29.610 "product_name": "Malloc disk", 00:16:29.610 "block_size": 512, 00:16:29.610 "num_blocks": 65536, 00:16:29.610 "uuid": "9bb47f79-6127-4c9e-a449-7647e9501554", 00:16:29.610 "assigned_rate_limits": { 00:16:29.610 "rw_ios_per_sec": 0, 00:16:29.610 "rw_mbytes_per_sec": 0, 00:16:29.610 "r_mbytes_per_sec": 0, 00:16:29.610 "w_mbytes_per_sec": 0 00:16:29.610 }, 00:16:29.610 "claimed": true, 00:16:29.610 "claim_type": "exclusive_write", 00:16:29.610 "zoned": false, 00:16:29.610 "supported_io_types": { 00:16:29.610 "read": true, 00:16:29.610 "write": true, 00:16:29.610 "unmap": true, 00:16:29.610 "flush": true, 00:16:29.610 "reset": true, 00:16:29.610 "nvme_admin": false, 00:16:29.610 "nvme_io": false, 00:16:29.610 "nvme_io_md": false, 00:16:29.610 "write_zeroes": true, 00:16:29.610 "zcopy": true, 00:16:29.610 "get_zone_info": false, 00:16:29.610 "zone_management": false, 00:16:29.610 "zone_append": false, 00:16:29.610 "compare": false, 00:16:29.610 "compare_and_write": false, 00:16:29.610 "abort": true, 00:16:29.610 "seek_hole": false, 00:16:29.610 "seek_data": false, 00:16:29.610 "copy": true, 00:16:29.610 "nvme_iov_md": false 00:16:29.610 }, 00:16:29.610 "memory_domains": [ 00:16:29.610 { 00:16:29.610 "dma_device_id": "system", 00:16:29.610 "dma_device_type": 1 00:16:29.610 }, 00:16:29.610 { 00:16:29.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.610 "dma_device_type": 2 00:16:29.610 } 00:16:29.610 ], 00:16:29.610 "driver_specific": {} 00:16:29.610 }' 00:16:29.610 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.610 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.610 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.610 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.870 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.870 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:29.870 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.870 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.870 06:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:29.870 06:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.870 06:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.870 06:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:29.870 06:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:30.129 
[2024-08-14 06:47:57.277139] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.129 [2024-08-14 06:47:57.277194] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.129 [2024-08-14 06:47:57.277303] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.129 [2024-08-14 06:47:57.277601] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.129 [2024-08-14 06:47:57.277613] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:16:30.129 06:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 89311 00:16:30.129 06:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 89311 ']' 00:16:30.129 06:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 89311 00:16:30.129 06:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:16:30.129 06:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:30.129 06:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89311 00:16:30.129 06:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:30.129 06:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:30.129 killing process with pid 89311 00:16:30.129 06:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89311' 00:16:30.129 06:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 89311 00:16:30.129 [2024-08-14 06:47:57.334972] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:30.129 06:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 89311 00:16:30.129 [2024-08-14 06:47:57.377612] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:30.389 06:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:30.389 00:16:30.389 real 0m29.085s 00:16:30.389 user 0m54.115s 00:16:30.389 sys 0m4.409s 00:16:30.389 06:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:30.389 06:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.389 ************************************ 00:16:30.389 END TEST raid_state_function_test 00:16:30.389 ************************************ 00:16:30.648 06:47:57 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:16:30.648 06:47:57 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:30.648 06:47:57 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:30.648 06:47:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:30.648 ************************************ 00:16:30.648 START TEST raid_state_function_test_sb 00:16:30.648 ************************************ 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 true 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:30.648 06:47:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=90327 00:16:30.648 Process raid pid: 90327 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 90327' 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 90327 
/var/tmp/spdk-raid.sock 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 90327 ']' 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:30.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:30.648 06:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.648 [2024-08-14 06:47:57.783996] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:16:30.648 [2024-08-14 06:47:57.784133] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.906 [2024-08-14 06:47:57.916763] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.906 [2024-08-14 06:47:57.965443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.906 [2024-08-14 06:47:58.007888] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.906 [2024-08-14 06:47:58.007939] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.474 06:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:31.474 06:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:16:31.474 06:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:31.733 [2024-08-14 06:47:58.851562] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:31.733 [2024-08-14 06:47:58.851619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:31.733 [2024-08-14 06:47:58.851633] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.733 [2024-08-14 06:47:58.851641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.733 [2024-08-14 06:47:58.851651] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:31.733 [2024-08-14 06:47:58.851658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:31.733 [2024-08-14 06:47:58.851668] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:31.733 [2024-08-14 06:47:58.851675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:31.733 06:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:31.733 06:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:16:31.733 06:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:31.733 06:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:31.733 06:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:31.733 06:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:31.733 06:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:31.733 06:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:31.733 06:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:31.733 06:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:31.733 06:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.733 06:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.992 06:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:31.992 "name": "Existed_Raid", 00:16:31.992 "uuid": "96e4322a-5850-4086-8d49-9055d148cec6", 00:16:31.992 "strip_size_kb": 0, 00:16:31.992 "state": "configuring", 00:16:31.992 "raid_level": "raid1", 00:16:31.992 "superblock": true, 00:16:31.992 "num_base_bdevs": 4, 00:16:31.992 "num_base_bdevs_discovered": 0, 00:16:31.992 "num_base_bdevs_operational": 4, 00:16:31.992 "base_bdevs_list": [ 00:16:31.992 { 00:16:31.992 "name": "BaseBdev1", 00:16:31.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.992 "is_configured": false, 00:16:31.992 "data_offset": 0, 00:16:31.992 "data_size": 0 00:16:31.992 }, 00:16:31.992 { 00:16:31.992 "name": "BaseBdev2", 00:16:31.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.992 "is_configured": false, 00:16:31.992 "data_offset": 0, 00:16:31.992 "data_size": 0 00:16:31.992 }, 00:16:31.992 { 00:16:31.992 "name": "BaseBdev3", 00:16:31.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.992 "is_configured": false, 00:16:31.992 "data_offset": 0, 00:16:31.992 "data_size": 0 00:16:31.992 }, 00:16:31.992 { 00:16:31.992 "name": "BaseBdev4", 00:16:31.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.992 "is_configured": false, 00:16:31.992 "data_offset": 0, 00:16:31.992 "data_size": 0 00:16:31.992 } 00:16:31.992 ] 00:16:31.992 }' 00:16:31.992 06:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:31.992 06:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.561 06:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:32.820 [2024-08-14 06:47:59.865715] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:32.820 [2024-08-14 06:47:59.865779] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:16:32.820 06:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 
'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:33.080 [2024-08-14 06:48:00.097364] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.080 [2024-08-14 06:48:00.097413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.080 [2024-08-14 06:48:00.097427] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:33.080 [2024-08-14 06:48:00.097451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:33.080 [2024-08-14 06:48:00.097461] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:33.080 [2024-08-14 06:48:00.097468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:33.080 [2024-08-14 06:48:00.097479] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:33.080 [2024-08-14 06:48:00.097486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:33.080 06:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:33.080 [2024-08-14 06:48:00.326418] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.080 BaseBdev1 00:16:33.339 06:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:33.339 06:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:33.339 06:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:33.339 06:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:33.339 06:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:33.339 06:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:33.340 06:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.340 06:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:33.599 [ 00:16:33.599 { 00:16:33.599 "name": "BaseBdev1", 00:16:33.599 "aliases": [ 00:16:33.599 "c7c3b912-d172-42f9-8079-dda818a74d3e" 00:16:33.599 ], 00:16:33.599 "product_name": "Malloc disk", 00:16:33.599 "block_size": 512, 00:16:33.599 "num_blocks": 65536, 00:16:33.599 "uuid": "c7c3b912-d172-42f9-8079-dda818a74d3e", 00:16:33.599 "assigned_rate_limits": { 00:16:33.599 "rw_ios_per_sec": 0, 00:16:33.599 "rw_mbytes_per_sec": 0, 00:16:33.599 "r_mbytes_per_sec": 0, 00:16:33.599 "w_mbytes_per_sec": 0 00:16:33.599 }, 00:16:33.599 "claimed": true, 00:16:33.599 "claim_type": "exclusive_write", 00:16:33.599 "zoned": false, 00:16:33.599 "supported_io_types": { 00:16:33.599 "read": true, 00:16:33.599 "write": true, 00:16:33.599 "unmap": true, 00:16:33.599 "flush": true, 00:16:33.599 "reset": true, 00:16:33.599 "nvme_admin": false, 00:16:33.599 "nvme_io": false, 00:16:33.599 "nvme_io_md": false, 00:16:33.599 "write_zeroes": true, 00:16:33.599 "zcopy": true, 00:16:33.599 "get_zone_info": false, 00:16:33.599 
"zone_management": false, 00:16:33.599 "zone_append": false, 00:16:33.599 "compare": false, 00:16:33.599 "compare_and_write": false, 00:16:33.599 "abort": true, 00:16:33.599 "seek_hole": false, 00:16:33.600 "seek_data": false, 00:16:33.600 "copy": true, 00:16:33.600 "nvme_iov_md": false 00:16:33.600 }, 00:16:33.600 "memory_domains": [ 00:16:33.600 { 00:16:33.600 "dma_device_id": "system", 00:16:33.600 "dma_device_type": 1 00:16:33.600 }, 00:16:33.600 { 00:16:33.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.600 "dma_device_type": 2 00:16:33.600 } 00:16:33.600 ], 00:16:33.600 "driver_specific": {} 00:16:33.600 } 00:16:33.600 ] 00:16:33.600 06:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:33.600 06:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:33.600 06:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:33.600 06:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:33.600 06:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:33.600 06:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:33.600 06:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:33.600 06:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.600 06:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.600 06:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.600 06:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.600 06:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.600 06:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.859 06:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.859 "name": "Existed_Raid", 00:16:33.859 "uuid": "5a486a8e-ead0-4f88-a591-f9baf17f3580", 00:16:33.859 "strip_size_kb": 0, 00:16:33.859 "state": "configuring", 00:16:33.859 "raid_level": "raid1", 00:16:33.859 "superblock": true, 00:16:33.859 "num_base_bdevs": 4, 00:16:33.859 "num_base_bdevs_discovered": 1, 00:16:33.859 "num_base_bdevs_operational": 4, 00:16:33.859 "base_bdevs_list": [ 00:16:33.859 { 00:16:33.859 "name": "BaseBdev1", 00:16:33.859 "uuid": "c7c3b912-d172-42f9-8079-dda818a74d3e", 00:16:33.859 "is_configured": true, 00:16:33.859 "data_offset": 2048, 00:16:33.859 "data_size": 63488 00:16:33.859 }, 00:16:33.859 { 00:16:33.859 "name": "BaseBdev2", 00:16:33.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.859 "is_configured": false, 00:16:33.859 "data_offset": 0, 00:16:33.859 "data_size": 0 00:16:33.859 }, 00:16:33.859 { 00:16:33.859 "name": "BaseBdev3", 00:16:33.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.859 "is_configured": false, 00:16:33.859 "data_offset": 0, 00:16:33.859 "data_size": 0 00:16:33.859 }, 00:16:33.859 { 00:16:33.859 "name": "BaseBdev4", 00:16:33.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.859 
"is_configured": false, 00:16:33.859 "data_offset": 0, 00:16:33.859 "data_size": 0 00:16:33.859 } 00:16:33.859 ] 00:16:33.859 }' 00:16:33.859 06:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.859 06:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.427 06:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:34.687 [2024-08-14 06:48:01.772091] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.687 [2024-08-14 06:48:01.772159] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:16:34.687 06:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:34.947 [2024-08-14 06:48:01.963828] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.947 [2024-08-14 06:48:01.965782] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.947 [2024-08-14 06:48:01.965825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.947 [2024-08-14 06:48:01.965838] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:34.947 [2024-08-14 06:48:01.965845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:34.947 [2024-08-14 06:48:01.965857] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:34.947 [2024-08-14 06:48:01.965864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:34.947 06:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:34.947 06:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:34.947 06:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:34.947 06:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:34.947 06:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:34.947 06:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:34.947 06:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:34.947 06:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:34.947 06:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.947 06:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.947 06:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.947 06:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.947 06:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:16:34.947 06:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.947 06:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:34.947 "name": "Existed_Raid", 00:16:34.947 "uuid": "03eb13ba-891a-436a-8d9e-accb8fbd4e3b", 00:16:34.947 "strip_size_kb": 0, 00:16:34.947 "state": "configuring", 00:16:34.947 "raid_level": "raid1", 00:16:34.947 "superblock": true, 00:16:34.947 "num_base_bdevs": 4, 00:16:34.947 "num_base_bdevs_discovered": 1, 00:16:34.947 "num_base_bdevs_operational": 4, 00:16:34.947 "base_bdevs_list": [ 00:16:34.947 { 00:16:34.947 "name": "BaseBdev1", 00:16:34.947 "uuid": "c7c3b912-d172-42f9-8079-dda818a74d3e", 00:16:34.947 "is_configured": true, 00:16:34.947 "data_offset": 2048, 00:16:34.947 "data_size": 63488 00:16:34.947 }, 00:16:34.947 { 00:16:34.947 "name": "BaseBdev2", 00:16:34.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.947 "is_configured": false, 00:16:34.947 "data_offset": 0, 00:16:34.947 "data_size": 0 00:16:34.947 }, 00:16:34.947 { 00:16:34.947 "name": "BaseBdev3", 00:16:34.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.947 "is_configured": false, 00:16:34.947 "data_offset": 0, 00:16:34.947 "data_size": 0 00:16:34.947 }, 00:16:34.947 { 00:16:34.947 "name": "BaseBdev4", 00:16:34.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.947 "is_configured": false, 00:16:34.947 "data_offset": 0, 00:16:34.947 "data_size": 0 00:16:34.947 } 00:16:34.947 ] 00:16:34.947 }' 00:16:34.947 06:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:34.947 06:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.563 06:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:35.823 [2024-08-14 06:48:02.971777] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.823 BaseBdev2 00:16:35.823 06:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:35.823 06:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:35.823 06:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:35.823 06:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:35.823 06:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:35.823 06:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:35.823 06:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:36.082 06:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:36.341 [ 00:16:36.341 { 00:16:36.341 "name": "BaseBdev2", 00:16:36.341 "aliases": [ 00:16:36.341 "e1380505-a3fa-450d-8cc8-229376ac9a1a" 00:16:36.341 ], 00:16:36.341 "product_name": "Malloc disk", 00:16:36.341 "block_size": 512, 00:16:36.341 "num_blocks": 65536, 00:16:36.341 "uuid": "e1380505-a3fa-450d-8cc8-229376ac9a1a", 00:16:36.341 
"assigned_rate_limits": { 00:16:36.341 "rw_ios_per_sec": 0, 00:16:36.341 "rw_mbytes_per_sec": 0, 00:16:36.341 "r_mbytes_per_sec": 0, 00:16:36.341 "w_mbytes_per_sec": 0 00:16:36.341 }, 00:16:36.341 "claimed": true, 00:16:36.341 "claim_type": "exclusive_write", 00:16:36.341 "zoned": false, 00:16:36.341 "supported_io_types": { 00:16:36.341 "read": true, 00:16:36.341 "write": true, 00:16:36.341 "unmap": true, 00:16:36.341 "flush": true, 00:16:36.341 "reset": true, 00:16:36.341 "nvme_admin": false, 00:16:36.341 "nvme_io": false, 00:16:36.341 "nvme_io_md": false, 00:16:36.341 "write_zeroes": true, 00:16:36.341 "zcopy": true, 00:16:36.341 "get_zone_info": false, 00:16:36.341 "zone_management": false, 00:16:36.341 "zone_append": false, 00:16:36.341 "compare": false, 00:16:36.341 "compare_and_write": false, 00:16:36.341 "abort": true, 00:16:36.341 "seek_hole": false, 00:16:36.341 "seek_data": false, 00:16:36.341 "copy": true, 00:16:36.341 "nvme_iov_md": false 00:16:36.341 }, 00:16:36.341 "memory_domains": [ 00:16:36.341 { 00:16:36.341 "dma_device_id": "system", 00:16:36.341 "dma_device_type": 1 00:16:36.341 }, 00:16:36.341 { 00:16:36.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.341 "dma_device_type": 2 00:16:36.341 } 00:16:36.341 ], 00:16:36.341 "driver_specific": {} 00:16:36.341 } 00:16:36.341 ] 00:16:36.341 06:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:36.341 06:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:36.341 06:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:36.341 06:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:36.341 06:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:36.341 06:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:36.341 06:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:36.341 06:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:36.341 06:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:36.341 06:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:36.341 06:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:36.341 06:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:36.341 06:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:36.341 06:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.341 06:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.601 06:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.601 "name": "Existed_Raid", 00:16:36.601 "uuid": "03eb13ba-891a-436a-8d9e-accb8fbd4e3b", 00:16:36.601 "strip_size_kb": 0, 00:16:36.601 "state": "configuring", 00:16:36.601 "raid_level": "raid1", 00:16:36.601 "superblock": true, 00:16:36.601 "num_base_bdevs": 4, 00:16:36.601 
"num_base_bdevs_discovered": 2, 00:16:36.601 "num_base_bdevs_operational": 4, 00:16:36.601 "base_bdevs_list": [ 00:16:36.601 { 00:16:36.601 "name": "BaseBdev1", 00:16:36.601 "uuid": "c7c3b912-d172-42f9-8079-dda818a74d3e", 00:16:36.601 "is_configured": true, 00:16:36.601 "data_offset": 2048, 00:16:36.601 "data_size": 63488 00:16:36.601 }, 00:16:36.601 { 00:16:36.601 "name": "BaseBdev2", 00:16:36.601 "uuid": "e1380505-a3fa-450d-8cc8-229376ac9a1a", 00:16:36.601 "is_configured": true, 00:16:36.601 "data_offset": 2048, 00:16:36.601 "data_size": 63488 00:16:36.601 }, 00:16:36.601 { 00:16:36.601 "name": "BaseBdev3", 00:16:36.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.601 "is_configured": false, 00:16:36.601 "data_offset": 0, 00:16:36.601 "data_size": 0 00:16:36.601 }, 00:16:36.601 { 00:16:36.601 "name": "BaseBdev4", 00:16:36.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.601 "is_configured": false, 00:16:36.601 "data_offset": 0, 00:16:36.601 "data_size": 0 00:16:36.601 } 00:16:36.601 ] 00:16:36.601 }' 00:16:36.601 06:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.601 06:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.168 06:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:37.168 [2024-08-14 06:48:04.420460] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:37.168 BaseBdev3 00:16:37.427 06:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:37.427 06:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:37.427 06:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:37.427 06:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:37.427 06:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:37.427 06:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:37.427 06:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:37.427 06:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:37.686 [ 00:16:37.686 { 00:16:37.686 "name": "BaseBdev3", 00:16:37.686 "aliases": [ 00:16:37.686 "95a3a1cc-4700-4226-988a-65b1e6bfee00" 00:16:37.686 ], 00:16:37.686 "product_name": "Malloc disk", 00:16:37.686 "block_size": 512, 00:16:37.686 "num_blocks": 65536, 00:16:37.686 "uuid": "95a3a1cc-4700-4226-988a-65b1e6bfee00", 00:16:37.686 "assigned_rate_limits": { 00:16:37.686 "rw_ios_per_sec": 0, 00:16:37.686 "rw_mbytes_per_sec": 0, 00:16:37.686 "r_mbytes_per_sec": 0, 00:16:37.686 "w_mbytes_per_sec": 0 00:16:37.686 }, 00:16:37.686 "claimed": true, 00:16:37.686 "claim_type": "exclusive_write", 00:16:37.686 "zoned": false, 00:16:37.686 "supported_io_types": { 00:16:37.686 "read": true, 00:16:37.686 "write": true, 00:16:37.686 "unmap": true, 00:16:37.686 "flush": true, 00:16:37.686 "reset": true, 00:16:37.686 "nvme_admin": false, 00:16:37.686 "nvme_io": false, 
00:16:37.686 "nvme_io_md": false, 00:16:37.686 "write_zeroes": true, 00:16:37.686 "zcopy": true, 00:16:37.686 "get_zone_info": false, 00:16:37.686 "zone_management": false, 00:16:37.686 "zone_append": false, 00:16:37.686 "compare": false, 00:16:37.686 "compare_and_write": false, 00:16:37.686 "abort": true, 00:16:37.686 "seek_hole": false, 00:16:37.686 "seek_data": false, 00:16:37.686 "copy": true, 00:16:37.686 "nvme_iov_md": false 00:16:37.686 }, 00:16:37.686 "memory_domains": [ 00:16:37.686 { 00:16:37.686 "dma_device_id": "system", 00:16:37.686 "dma_device_type": 1 00:16:37.686 }, 00:16:37.686 { 00:16:37.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.686 "dma_device_type": 2 00:16:37.686 } 00:16:37.686 ], 00:16:37.686 "driver_specific": {} 00:16:37.686 } 00:16:37.686 ] 00:16:37.686 06:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:37.686 06:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:37.686 06:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:37.686 06:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:37.686 06:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:37.686 06:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:37.686 06:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:37.686 06:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:37.686 06:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:37.686 06:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.686 06:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.686 06:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.686 06:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.686 06:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.686 06:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.945 06:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.945 "name": "Existed_Raid", 00:16:37.945 "uuid": "03eb13ba-891a-436a-8d9e-accb8fbd4e3b", 00:16:37.945 "strip_size_kb": 0, 00:16:37.945 "state": "configuring", 00:16:37.945 "raid_level": "raid1", 00:16:37.945 "superblock": true, 00:16:37.945 "num_base_bdevs": 4, 00:16:37.945 "num_base_bdevs_discovered": 3, 00:16:37.945 "num_base_bdevs_operational": 4, 00:16:37.945 "base_bdevs_list": [ 00:16:37.945 { 00:16:37.945 "name": "BaseBdev1", 00:16:37.945 "uuid": "c7c3b912-d172-42f9-8079-dda818a74d3e", 00:16:37.945 "is_configured": true, 00:16:37.945 "data_offset": 2048, 00:16:37.945 "data_size": 63488 00:16:37.945 }, 00:16:37.945 { 00:16:37.945 "name": "BaseBdev2", 00:16:37.945 "uuid": "e1380505-a3fa-450d-8cc8-229376ac9a1a", 00:16:37.945 "is_configured": true, 00:16:37.945 "data_offset": 2048, 00:16:37.945 
"data_size": 63488 00:16:37.945 }, 00:16:37.945 { 00:16:37.945 "name": "BaseBdev3", 00:16:37.945 "uuid": "95a3a1cc-4700-4226-988a-65b1e6bfee00", 00:16:37.945 "is_configured": true, 00:16:37.945 "data_offset": 2048, 00:16:37.945 "data_size": 63488 00:16:37.945 }, 00:16:37.945 { 00:16:37.945 "name": "BaseBdev4", 00:16:37.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.945 "is_configured": false, 00:16:37.945 "data_offset": 0, 00:16:37.945 "data_size": 0 00:16:37.945 } 00:16:37.945 ] 00:16:37.945 }' 00:16:37.945 06:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.945 06:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.512 06:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:38.771 [2024-08-14 06:48:05.857382] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:38.771 [2024-08-14 06:48:05.857721] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:38.771 [2024-08-14 06:48:05.857772] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:38.771 [2024-08-14 06:48:05.858086] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:38.771 [2024-08-14 06:48:05.858326] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:38.771 [2024-08-14 06:48:05.858377] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:16:38.771 [2024-08-14 06:48:05.858569] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.771 BaseBdev4 00:16:38.771 06:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:16:38.771 06:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:16:38.771 06:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:38.771 06:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:38.771 06:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:38.771 06:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:38.771 06:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:39.029 06:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:39.288 [ 00:16:39.288 { 00:16:39.288 "name": "BaseBdev4", 00:16:39.288 "aliases": [ 00:16:39.288 "6a01124f-4105-41d4-9dba-0cf4260d221c" 00:16:39.288 ], 00:16:39.288 "product_name": "Malloc disk", 00:16:39.288 "block_size": 512, 00:16:39.288 "num_blocks": 65536, 00:16:39.288 "uuid": "6a01124f-4105-41d4-9dba-0cf4260d221c", 00:16:39.288 "assigned_rate_limits": { 00:16:39.288 "rw_ios_per_sec": 0, 00:16:39.288 "rw_mbytes_per_sec": 0, 00:16:39.288 "r_mbytes_per_sec": 0, 00:16:39.288 "w_mbytes_per_sec": 0 00:16:39.288 }, 00:16:39.288 "claimed": true, 00:16:39.288 "claim_type": "exclusive_write", 00:16:39.288 
"zoned": false, 00:16:39.288 "supported_io_types": { 00:16:39.288 "read": true, 00:16:39.288 "write": true, 00:16:39.288 "unmap": true, 00:16:39.288 "flush": true, 00:16:39.288 "reset": true, 00:16:39.288 "nvme_admin": false, 00:16:39.288 "nvme_io": false, 00:16:39.288 "nvme_io_md": false, 00:16:39.288 "write_zeroes": true, 00:16:39.288 "zcopy": true, 00:16:39.288 "get_zone_info": false, 00:16:39.288 "zone_management": false, 00:16:39.288 "zone_append": false, 00:16:39.288 "compare": false, 00:16:39.288 "compare_and_write": false, 00:16:39.288 "abort": true, 00:16:39.288 "seek_hole": false, 00:16:39.288 "seek_data": false, 00:16:39.288 "copy": true, 00:16:39.288 "nvme_iov_md": false 00:16:39.288 }, 00:16:39.288 "memory_domains": [ 00:16:39.288 { 00:16:39.288 "dma_device_id": "system", 00:16:39.288 "dma_device_type": 1 00:16:39.288 }, 00:16:39.288 { 00:16:39.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.288 "dma_device_type": 2 00:16:39.288 } 00:16:39.288 ], 00:16:39.288 "driver_specific": {} 00:16:39.288 } 00:16:39.288 ] 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.288 "name": "Existed_Raid", 00:16:39.288 "uuid": "03eb13ba-891a-436a-8d9e-accb8fbd4e3b", 00:16:39.288 "strip_size_kb": 0, 00:16:39.288 "state": "online", 00:16:39.288 "raid_level": "raid1", 00:16:39.288 "superblock": true, 00:16:39.288 "num_base_bdevs": 4, 00:16:39.288 "num_base_bdevs_discovered": 4, 00:16:39.288 "num_base_bdevs_operational": 4, 00:16:39.288 "base_bdevs_list": [ 00:16:39.288 { 00:16:39.288 "name": "BaseBdev1", 00:16:39.288 "uuid": "c7c3b912-d172-42f9-8079-dda818a74d3e", 00:16:39.288 "is_configured": true, 00:16:39.288 "data_offset": 2048, 
00:16:39.288 "data_size": 63488 00:16:39.288 }, 00:16:39.288 { 00:16:39.288 "name": "BaseBdev2", 00:16:39.288 "uuid": "e1380505-a3fa-450d-8cc8-229376ac9a1a", 00:16:39.288 "is_configured": true, 00:16:39.288 "data_offset": 2048, 00:16:39.288 "data_size": 63488 00:16:39.288 }, 00:16:39.288 { 00:16:39.288 "name": "BaseBdev3", 00:16:39.288 "uuid": "95a3a1cc-4700-4226-988a-65b1e6bfee00", 00:16:39.288 "is_configured": true, 00:16:39.288 "data_offset": 2048, 00:16:39.288 "data_size": 63488 00:16:39.288 }, 00:16:39.288 { 00:16:39.288 "name": "BaseBdev4", 00:16:39.288 "uuid": "6a01124f-4105-41d4-9dba-0cf4260d221c", 00:16:39.288 "is_configured": true, 00:16:39.288 "data_offset": 2048, 00:16:39.288 "data_size": 63488 00:16:39.288 } 00:16:39.288 ] 00:16:39.288 }' 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.288 06:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.856 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:39.856 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:39.856 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:39.856 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:39.856 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:39.856 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:39.856 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:39.856 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:40.114 [2024-08-14 06:48:07.287407] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:40.114 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:40.114 "name": "Existed_Raid", 00:16:40.114 "aliases": [ 00:16:40.114 "03eb13ba-891a-436a-8d9e-accb8fbd4e3b" 00:16:40.114 ], 00:16:40.114 "product_name": "Raid Volume", 00:16:40.114 "block_size": 512, 00:16:40.114 "num_blocks": 63488, 00:16:40.114 "uuid": "03eb13ba-891a-436a-8d9e-accb8fbd4e3b", 00:16:40.114 "assigned_rate_limits": { 00:16:40.114 "rw_ios_per_sec": 0, 00:16:40.114 "rw_mbytes_per_sec": 0, 00:16:40.114 "r_mbytes_per_sec": 0, 00:16:40.114 "w_mbytes_per_sec": 0 00:16:40.114 }, 00:16:40.114 "claimed": false, 00:16:40.114 "zoned": false, 00:16:40.114 "supported_io_types": { 00:16:40.114 "read": true, 00:16:40.114 "write": true, 00:16:40.114 "unmap": false, 00:16:40.114 "flush": false, 00:16:40.114 "reset": true, 00:16:40.114 "nvme_admin": false, 00:16:40.114 "nvme_io": false, 00:16:40.114 "nvme_io_md": false, 00:16:40.114 "write_zeroes": true, 00:16:40.114 "zcopy": false, 00:16:40.114 "get_zone_info": false, 00:16:40.114 "zone_management": false, 00:16:40.114 "zone_append": false, 00:16:40.114 "compare": false, 00:16:40.114 "compare_and_write": false, 00:16:40.114 "abort": false, 00:16:40.114 "seek_hole": false, 00:16:40.114 "seek_data": false, 00:16:40.114 "copy": false, 00:16:40.114 "nvme_iov_md": false 00:16:40.114 }, 00:16:40.114 "memory_domains": [ 00:16:40.114 { 00:16:40.114 "dma_device_id": "system", 00:16:40.114 
"dma_device_type": 1 00:16:40.114 }, 00:16:40.114 { 00:16:40.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.114 "dma_device_type": 2 00:16:40.114 }, 00:16:40.114 { 00:16:40.114 "dma_device_id": "system", 00:16:40.114 "dma_device_type": 1 00:16:40.114 }, 00:16:40.114 { 00:16:40.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.114 "dma_device_type": 2 00:16:40.114 }, 00:16:40.114 { 00:16:40.114 "dma_device_id": "system", 00:16:40.114 "dma_device_type": 1 00:16:40.114 }, 00:16:40.114 { 00:16:40.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.115 "dma_device_type": 2 00:16:40.115 }, 00:16:40.115 { 00:16:40.115 "dma_device_id": "system", 00:16:40.115 "dma_device_type": 1 00:16:40.115 }, 00:16:40.115 { 00:16:40.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.115 "dma_device_type": 2 00:16:40.115 } 00:16:40.115 ], 00:16:40.115 "driver_specific": { 00:16:40.115 "raid": { 00:16:40.115 "uuid": "03eb13ba-891a-436a-8d9e-accb8fbd4e3b", 00:16:40.115 "strip_size_kb": 0, 00:16:40.115 "state": "online", 00:16:40.115 "raid_level": "raid1", 00:16:40.115 "superblock": true, 00:16:40.115 "num_base_bdevs": 4, 00:16:40.115 "num_base_bdevs_discovered": 4, 00:16:40.115 "num_base_bdevs_operational": 4, 00:16:40.115 "base_bdevs_list": [ 00:16:40.115 { 00:16:40.115 "name": "BaseBdev1", 00:16:40.115 "uuid": "c7c3b912-d172-42f9-8079-dda818a74d3e", 00:16:40.115 "is_configured": true, 00:16:40.115 "data_offset": 2048, 00:16:40.115 "data_size": 63488 00:16:40.115 }, 00:16:40.115 { 00:16:40.115 "name": "BaseBdev2", 00:16:40.115 "uuid": "e1380505-a3fa-450d-8cc8-229376ac9a1a", 00:16:40.115 "is_configured": true, 00:16:40.115 "data_offset": 2048, 00:16:40.115 "data_size": 63488 00:16:40.115 }, 00:16:40.115 { 00:16:40.115 "name": "BaseBdev3", 00:16:40.115 "uuid": "95a3a1cc-4700-4226-988a-65b1e6bfee00", 00:16:40.115 "is_configured": true, 00:16:40.115 "data_offset": 2048, 00:16:40.115 "data_size": 63488 00:16:40.115 }, 00:16:40.115 { 00:16:40.115 "name": "BaseBdev4", 00:16:40.115 "uuid": "6a01124f-4105-41d4-9dba-0cf4260d221c", 00:16:40.115 "is_configured": true, 00:16:40.115 "data_offset": 2048, 00:16:40.115 "data_size": 63488 00:16:40.115 } 00:16:40.115 ] 00:16:40.115 } 00:16:40.115 } 00:16:40.115 }' 00:16:40.115 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:40.115 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:40.115 BaseBdev2 00:16:40.115 BaseBdev3 00:16:40.115 BaseBdev4' 00:16:40.115 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:40.115 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:40.115 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:40.373 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:40.373 "name": "BaseBdev1", 00:16:40.373 "aliases": [ 00:16:40.373 "c7c3b912-d172-42f9-8079-dda818a74d3e" 00:16:40.373 ], 00:16:40.373 "product_name": "Malloc disk", 00:16:40.373 "block_size": 512, 00:16:40.373 "num_blocks": 65536, 00:16:40.373 "uuid": "c7c3b912-d172-42f9-8079-dda818a74d3e", 00:16:40.373 "assigned_rate_limits": { 00:16:40.373 "rw_ios_per_sec": 0, 00:16:40.373 "rw_mbytes_per_sec": 0, 00:16:40.373 "r_mbytes_per_sec": 0, 
00:16:40.373 "w_mbytes_per_sec": 0 00:16:40.373 }, 00:16:40.373 "claimed": true, 00:16:40.373 "claim_type": "exclusive_write", 00:16:40.373 "zoned": false, 00:16:40.373 "supported_io_types": { 00:16:40.373 "read": true, 00:16:40.373 "write": true, 00:16:40.373 "unmap": true, 00:16:40.373 "flush": true, 00:16:40.373 "reset": true, 00:16:40.373 "nvme_admin": false, 00:16:40.373 "nvme_io": false, 00:16:40.373 "nvme_io_md": false, 00:16:40.373 "write_zeroes": true, 00:16:40.373 "zcopy": true, 00:16:40.373 "get_zone_info": false, 00:16:40.373 "zone_management": false, 00:16:40.373 "zone_append": false, 00:16:40.373 "compare": false, 00:16:40.373 "compare_and_write": false, 00:16:40.373 "abort": true, 00:16:40.373 "seek_hole": false, 00:16:40.373 "seek_data": false, 00:16:40.373 "copy": true, 00:16:40.373 "nvme_iov_md": false 00:16:40.373 }, 00:16:40.373 "memory_domains": [ 00:16:40.373 { 00:16:40.373 "dma_device_id": "system", 00:16:40.373 "dma_device_type": 1 00:16:40.373 }, 00:16:40.373 { 00:16:40.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.373 "dma_device_type": 2 00:16:40.373 } 00:16:40.373 ], 00:16:40.373 "driver_specific": {} 00:16:40.373 }' 00:16:40.373 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:40.373 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:40.632 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:40.632 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:40.632 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:40.632 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:40.632 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:40.632 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:40.632 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:40.632 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:40.632 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:40.890 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:40.890 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:40.890 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:40.890 06:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:40.890 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:40.890 "name": "BaseBdev2", 00:16:40.890 "aliases": [ 00:16:40.890 "e1380505-a3fa-450d-8cc8-229376ac9a1a" 00:16:40.890 ], 00:16:40.890 "product_name": "Malloc disk", 00:16:40.890 "block_size": 512, 00:16:40.890 "num_blocks": 65536, 00:16:40.890 "uuid": "e1380505-a3fa-450d-8cc8-229376ac9a1a", 00:16:40.890 "assigned_rate_limits": { 00:16:40.890 "rw_ios_per_sec": 0, 00:16:40.890 "rw_mbytes_per_sec": 0, 00:16:40.890 "r_mbytes_per_sec": 0, 00:16:40.890 "w_mbytes_per_sec": 0 00:16:40.890 }, 00:16:40.890 "claimed": true, 00:16:40.890 "claim_type": "exclusive_write", 00:16:40.890 "zoned": 
false, 00:16:40.890 "supported_io_types": { 00:16:40.890 "read": true, 00:16:40.890 "write": true, 00:16:40.890 "unmap": true, 00:16:40.890 "flush": true, 00:16:40.890 "reset": true, 00:16:40.890 "nvme_admin": false, 00:16:40.890 "nvme_io": false, 00:16:40.890 "nvme_io_md": false, 00:16:40.890 "write_zeroes": true, 00:16:40.890 "zcopy": true, 00:16:40.890 "get_zone_info": false, 00:16:40.890 "zone_management": false, 00:16:40.890 "zone_append": false, 00:16:40.890 "compare": false, 00:16:40.890 "compare_and_write": false, 00:16:40.890 "abort": true, 00:16:40.890 "seek_hole": false, 00:16:40.890 "seek_data": false, 00:16:40.890 "copy": true, 00:16:40.890 "nvme_iov_md": false 00:16:40.890 }, 00:16:40.890 "memory_domains": [ 00:16:40.890 { 00:16:40.890 "dma_device_id": "system", 00:16:40.890 "dma_device_type": 1 00:16:40.890 }, 00:16:40.890 { 00:16:40.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.890 "dma_device_type": 2 00:16:40.890 } 00:16:40.890 ], 00:16:40.890 "driver_specific": {} 00:16:40.890 }' 00:16:40.890 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:41.148 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:41.148 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:41.148 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:41.148 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:41.148 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:41.148 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:41.148 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:41.406 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:41.406 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:41.406 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:41.406 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:41.406 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:41.406 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:41.406 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:41.664 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:41.664 "name": "BaseBdev3", 00:16:41.664 "aliases": [ 00:16:41.664 "95a3a1cc-4700-4226-988a-65b1e6bfee00" 00:16:41.664 ], 00:16:41.664 "product_name": "Malloc disk", 00:16:41.664 "block_size": 512, 00:16:41.664 "num_blocks": 65536, 00:16:41.664 "uuid": "95a3a1cc-4700-4226-988a-65b1e6bfee00", 00:16:41.664 "assigned_rate_limits": { 00:16:41.664 "rw_ios_per_sec": 0, 00:16:41.664 "rw_mbytes_per_sec": 0, 00:16:41.664 "r_mbytes_per_sec": 0, 00:16:41.664 "w_mbytes_per_sec": 0 00:16:41.664 }, 00:16:41.664 "claimed": true, 00:16:41.664 "claim_type": "exclusive_write", 00:16:41.664 "zoned": false, 00:16:41.664 "supported_io_types": { 00:16:41.664 "read": true, 00:16:41.664 "write": true, 00:16:41.664 "unmap": true, 00:16:41.664 "flush": 
true, 00:16:41.664 "reset": true, 00:16:41.664 "nvme_admin": false, 00:16:41.664 "nvme_io": false, 00:16:41.664 "nvme_io_md": false, 00:16:41.664 "write_zeroes": true, 00:16:41.664 "zcopy": true, 00:16:41.664 "get_zone_info": false, 00:16:41.664 "zone_management": false, 00:16:41.664 "zone_append": false, 00:16:41.664 "compare": false, 00:16:41.664 "compare_and_write": false, 00:16:41.664 "abort": true, 00:16:41.664 "seek_hole": false, 00:16:41.664 "seek_data": false, 00:16:41.664 "copy": true, 00:16:41.664 "nvme_iov_md": false 00:16:41.664 }, 00:16:41.664 "memory_domains": [ 00:16:41.664 { 00:16:41.664 "dma_device_id": "system", 00:16:41.664 "dma_device_type": 1 00:16:41.664 }, 00:16:41.664 { 00:16:41.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.664 "dma_device_type": 2 00:16:41.664 } 00:16:41.664 ], 00:16:41.664 "driver_specific": {} 00:16:41.664 }' 00:16:41.664 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:41.664 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:41.664 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:41.664 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:41.664 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:41.664 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:41.664 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:41.922 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:41.922 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:41.922 06:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:41.922 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:41.922 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:41.922 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:41.922 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:41.922 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:42.181 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:42.181 "name": "BaseBdev4", 00:16:42.181 "aliases": [ 00:16:42.181 "6a01124f-4105-41d4-9dba-0cf4260d221c" 00:16:42.181 ], 00:16:42.181 "product_name": "Malloc disk", 00:16:42.181 "block_size": 512, 00:16:42.181 "num_blocks": 65536, 00:16:42.181 "uuid": "6a01124f-4105-41d4-9dba-0cf4260d221c", 00:16:42.181 "assigned_rate_limits": { 00:16:42.181 "rw_ios_per_sec": 0, 00:16:42.181 "rw_mbytes_per_sec": 0, 00:16:42.181 "r_mbytes_per_sec": 0, 00:16:42.181 "w_mbytes_per_sec": 0 00:16:42.181 }, 00:16:42.181 "claimed": true, 00:16:42.181 "claim_type": "exclusive_write", 00:16:42.181 "zoned": false, 00:16:42.181 "supported_io_types": { 00:16:42.181 "read": true, 00:16:42.181 "write": true, 00:16:42.181 "unmap": true, 00:16:42.181 "flush": true, 00:16:42.181 "reset": true, 00:16:42.181 "nvme_admin": false, 00:16:42.181 "nvme_io": false, 00:16:42.181 "nvme_io_md": false, 00:16:42.181 
"write_zeroes": true, 00:16:42.181 "zcopy": true, 00:16:42.181 "get_zone_info": false, 00:16:42.181 "zone_management": false, 00:16:42.181 "zone_append": false, 00:16:42.181 "compare": false, 00:16:42.181 "compare_and_write": false, 00:16:42.181 "abort": true, 00:16:42.181 "seek_hole": false, 00:16:42.181 "seek_data": false, 00:16:42.181 "copy": true, 00:16:42.181 "nvme_iov_md": false 00:16:42.181 }, 00:16:42.181 "memory_domains": [ 00:16:42.181 { 00:16:42.181 "dma_device_id": "system", 00:16:42.181 "dma_device_type": 1 00:16:42.181 }, 00:16:42.181 { 00:16:42.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.181 "dma_device_type": 2 00:16:42.181 } 00:16:42.181 ], 00:16:42.181 "driver_specific": {} 00:16:42.181 }' 00:16:42.181 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:42.181 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:42.181 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:42.181 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:42.181 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:42.440 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:42.440 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:42.440 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:42.440 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:42.440 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:42.440 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:42.440 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:42.440 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:42.699 [2024-08-14 06:48:09.794956] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.699 06:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.958 06:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:42.958 "name": "Existed_Raid", 00:16:42.958 "uuid": "03eb13ba-891a-436a-8d9e-accb8fbd4e3b", 00:16:42.958 "strip_size_kb": 0, 00:16:42.958 "state": "online", 00:16:42.958 "raid_level": "raid1", 00:16:42.958 "superblock": true, 00:16:42.958 "num_base_bdevs": 4, 00:16:42.958 "num_base_bdevs_discovered": 3, 00:16:42.958 "num_base_bdevs_operational": 3, 00:16:42.958 "base_bdevs_list": [ 00:16:42.958 { 00:16:42.958 "name": null, 00:16:42.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.958 "is_configured": false, 00:16:42.958 "data_offset": 2048, 00:16:42.958 "data_size": 63488 00:16:42.958 }, 00:16:42.958 { 00:16:42.958 "name": "BaseBdev2", 00:16:42.958 "uuid": "e1380505-a3fa-450d-8cc8-229376ac9a1a", 00:16:42.958 "is_configured": true, 00:16:42.958 "data_offset": 2048, 00:16:42.958 "data_size": 63488 00:16:42.958 }, 00:16:42.958 { 00:16:42.958 "name": "BaseBdev3", 00:16:42.958 "uuid": "95a3a1cc-4700-4226-988a-65b1e6bfee00", 00:16:42.958 "is_configured": true, 00:16:42.958 "data_offset": 2048, 00:16:42.958 "data_size": 63488 00:16:42.958 }, 00:16:42.958 { 00:16:42.958 "name": "BaseBdev4", 00:16:42.958 "uuid": "6a01124f-4105-41d4-9dba-0cf4260d221c", 00:16:42.958 "is_configured": true, 00:16:42.958 "data_offset": 2048, 00:16:42.958 "data_size": 63488 00:16:42.958 } 00:16:42.958 ] 00:16:42.958 }' 00:16:42.958 06:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:42.958 06:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.524 06:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:43.524 06:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:43.524 06:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.524 06:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:43.524 06:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:43.524 06:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:43.524 06:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:43.782 [2024-08-14 06:48:10.904813] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:43.782 06:48:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:43.782 06:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:43.782 06:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.782 06:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:44.039 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:44.039 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:44.039 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:44.298 [2024-08-14 06:48:11.331562] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:44.298 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:44.298 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:44.298 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.298 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:44.556 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:44.556 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:44.556 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:44.556 [2024-08-14 06:48:11.758251] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:44.556 [2024-08-14 06:48:11.758383] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.556 [2024-08-14 06:48:11.769969] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.556 [2024-08-14 06:48:11.770025] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.556 [2024-08-14 06:48:11.770036] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:16:44.556 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:44.556 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:44.556 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.556 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:44.814 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:44.814 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:44.814 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:44.814 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:44.814 
06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:44.814 06:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:45.072 BaseBdev2 00:16:45.072 06:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:45.072 06:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:45.072 06:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:45.072 06:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:45.072 06:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:45.072 06:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:45.072 06:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:45.330 06:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:45.330 [ 00:16:45.330 { 00:16:45.330 "name": "BaseBdev2", 00:16:45.330 "aliases": [ 00:16:45.330 "91f5682c-7f50-43a4-bdfe-f2d84bf26b9e" 00:16:45.330 ], 00:16:45.330 "product_name": "Malloc disk", 00:16:45.330 "block_size": 512, 00:16:45.330 "num_blocks": 65536, 00:16:45.330 "uuid": "91f5682c-7f50-43a4-bdfe-f2d84bf26b9e", 00:16:45.330 "assigned_rate_limits": { 00:16:45.330 "rw_ios_per_sec": 0, 00:16:45.330 "rw_mbytes_per_sec": 0, 00:16:45.330 "r_mbytes_per_sec": 0, 00:16:45.330 "w_mbytes_per_sec": 0 00:16:45.330 }, 00:16:45.330 "claimed": false, 00:16:45.330 "zoned": false, 00:16:45.330 "supported_io_types": { 00:16:45.330 "read": true, 00:16:45.330 "write": true, 00:16:45.330 "unmap": true, 00:16:45.330 "flush": true, 00:16:45.330 "reset": true, 00:16:45.330 "nvme_admin": false, 00:16:45.330 "nvme_io": false, 00:16:45.330 "nvme_io_md": false, 00:16:45.330 "write_zeroes": true, 00:16:45.330 "zcopy": true, 00:16:45.330 "get_zone_info": false, 00:16:45.330 "zone_management": false, 00:16:45.330 "zone_append": false, 00:16:45.330 "compare": false, 00:16:45.330 "compare_and_write": false, 00:16:45.330 "abort": true, 00:16:45.330 "seek_hole": false, 00:16:45.330 "seek_data": false, 00:16:45.330 "copy": true, 00:16:45.330 "nvme_iov_md": false 00:16:45.330 }, 00:16:45.330 "memory_domains": [ 00:16:45.330 { 00:16:45.330 "dma_device_id": "system", 00:16:45.330 "dma_device_type": 1 00:16:45.330 }, 00:16:45.330 { 00:16:45.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.330 "dma_device_type": 2 00:16:45.330 } 00:16:45.330 ], 00:16:45.330 "driver_specific": {} 00:16:45.330 } 00:16:45.330 ] 00:16:45.330 06:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:45.330 06:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:45.330 06:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:45.330 06:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 
00:16:45.594 BaseBdev3 00:16:45.595 06:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:45.595 06:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:45.595 06:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:45.595 06:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:45.595 06:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:45.595 06:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:45.595 06:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:45.863 06:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:46.121 [ 00:16:46.121 { 00:16:46.121 "name": "BaseBdev3", 00:16:46.121 "aliases": [ 00:16:46.121 "01a3dd30-96fe-4f04-ad67-f6ad01a70c1e" 00:16:46.121 ], 00:16:46.121 "product_name": "Malloc disk", 00:16:46.121 "block_size": 512, 00:16:46.121 "num_blocks": 65536, 00:16:46.121 "uuid": "01a3dd30-96fe-4f04-ad67-f6ad01a70c1e", 00:16:46.121 "assigned_rate_limits": { 00:16:46.121 "rw_ios_per_sec": 0, 00:16:46.121 "rw_mbytes_per_sec": 0, 00:16:46.121 "r_mbytes_per_sec": 0, 00:16:46.121 "w_mbytes_per_sec": 0 00:16:46.121 }, 00:16:46.121 "claimed": false, 00:16:46.121 "zoned": false, 00:16:46.121 "supported_io_types": { 00:16:46.121 "read": true, 00:16:46.121 "write": true, 00:16:46.121 "unmap": true, 00:16:46.121 "flush": true, 00:16:46.121 "reset": true, 00:16:46.121 "nvme_admin": false, 00:16:46.121 "nvme_io": false, 00:16:46.121 "nvme_io_md": false, 00:16:46.121 "write_zeroes": true, 00:16:46.121 "zcopy": true, 00:16:46.121 "get_zone_info": false, 00:16:46.121 "zone_management": false, 00:16:46.121 "zone_append": false, 00:16:46.121 "compare": false, 00:16:46.121 "compare_and_write": false, 00:16:46.121 "abort": true, 00:16:46.121 "seek_hole": false, 00:16:46.121 "seek_data": false, 00:16:46.121 "copy": true, 00:16:46.121 "nvme_iov_md": false 00:16:46.121 }, 00:16:46.121 "memory_domains": [ 00:16:46.121 { 00:16:46.121 "dma_device_id": "system", 00:16:46.121 "dma_device_type": 1 00:16:46.121 }, 00:16:46.121 { 00:16:46.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.121 "dma_device_type": 2 00:16:46.121 } 00:16:46.121 ], 00:16:46.121 "driver_specific": {} 00:16:46.121 } 00:16:46.121 ] 00:16:46.121 06:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:46.121 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:46.121 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:46.121 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:46.121 BaseBdev4 00:16:46.122 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:46.122 06:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:16:46.122 06:48:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:46.122 06:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:46.122 06:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:46.122 06:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:46.122 06:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:46.379 06:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:46.645 [ 00:16:46.645 { 00:16:46.645 "name": "BaseBdev4", 00:16:46.645 "aliases": [ 00:16:46.645 "f17ebd59-e8b7-4334-8164-3c9a0495c6df" 00:16:46.645 ], 00:16:46.645 "product_name": "Malloc disk", 00:16:46.645 "block_size": 512, 00:16:46.645 "num_blocks": 65536, 00:16:46.645 "uuid": "f17ebd59-e8b7-4334-8164-3c9a0495c6df", 00:16:46.645 "assigned_rate_limits": { 00:16:46.645 "rw_ios_per_sec": 0, 00:16:46.645 "rw_mbytes_per_sec": 0, 00:16:46.645 "r_mbytes_per_sec": 0, 00:16:46.645 "w_mbytes_per_sec": 0 00:16:46.645 }, 00:16:46.645 "claimed": false, 00:16:46.645 "zoned": false, 00:16:46.645 "supported_io_types": { 00:16:46.645 "read": true, 00:16:46.645 "write": true, 00:16:46.645 "unmap": true, 00:16:46.645 "flush": true, 00:16:46.645 "reset": true, 00:16:46.645 "nvme_admin": false, 00:16:46.645 "nvme_io": false, 00:16:46.645 "nvme_io_md": false, 00:16:46.645 "write_zeroes": true, 00:16:46.645 "zcopy": true, 00:16:46.645 "get_zone_info": false, 00:16:46.645 "zone_management": false, 00:16:46.645 "zone_append": false, 00:16:46.645 "compare": false, 00:16:46.646 "compare_and_write": false, 00:16:46.646 "abort": true, 00:16:46.646 "seek_hole": false, 00:16:46.646 "seek_data": false, 00:16:46.646 "copy": true, 00:16:46.646 "nvme_iov_md": false 00:16:46.646 }, 00:16:46.646 "memory_domains": [ 00:16:46.646 { 00:16:46.646 "dma_device_id": "system", 00:16:46.646 "dma_device_type": 1 00:16:46.646 }, 00:16:46.646 { 00:16:46.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.646 "dma_device_type": 2 00:16:46.646 } 00:16:46.646 ], 00:16:46.646 "driver_specific": {} 00:16:46.646 } 00:16:46.646 ] 00:16:46.646 06:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:46.646 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:46.646 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:46.646 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:46.904 [2024-08-14 06:48:13.932898] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.904 [2024-08-14 06:48:13.932952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.904 [2024-08-14 06:48:13.932981] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.904 [2024-08-14 06:48:13.934925] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:46.904 [2024-08-14 
06:48:13.935059] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:46.904 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:46.904 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:46.904 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:46.904 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:46.904 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:46.904 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:46.904 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:46.904 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:46.904 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:46.904 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:46.904 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.905 06:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.162 06:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.162 "name": "Existed_Raid", 00:16:47.162 "uuid": "2fc2712c-a6b9-40d2-a15f-e1b79814438f", 00:16:47.162 "strip_size_kb": 0, 00:16:47.162 "state": "configuring", 00:16:47.162 "raid_level": "raid1", 00:16:47.162 "superblock": true, 00:16:47.162 "num_base_bdevs": 4, 00:16:47.162 "num_base_bdevs_discovered": 3, 00:16:47.162 "num_base_bdevs_operational": 4, 00:16:47.162 "base_bdevs_list": [ 00:16:47.162 { 00:16:47.162 "name": "BaseBdev1", 00:16:47.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.162 "is_configured": false, 00:16:47.162 "data_offset": 0, 00:16:47.162 "data_size": 0 00:16:47.162 }, 00:16:47.162 { 00:16:47.162 "name": "BaseBdev2", 00:16:47.162 "uuid": "91f5682c-7f50-43a4-bdfe-f2d84bf26b9e", 00:16:47.162 "is_configured": true, 00:16:47.162 "data_offset": 2048, 00:16:47.162 "data_size": 63488 00:16:47.162 }, 00:16:47.162 { 00:16:47.162 "name": "BaseBdev3", 00:16:47.162 "uuid": "01a3dd30-96fe-4f04-ad67-f6ad01a70c1e", 00:16:47.162 "is_configured": true, 00:16:47.162 "data_offset": 2048, 00:16:47.162 "data_size": 63488 00:16:47.162 }, 00:16:47.162 { 00:16:47.162 "name": "BaseBdev4", 00:16:47.162 "uuid": "f17ebd59-e8b7-4334-8164-3c9a0495c6df", 00:16:47.162 "is_configured": true, 00:16:47.162 "data_offset": 2048, 00:16:47.162 "data_size": 63488 00:16:47.162 } 00:16:47.162 ] 00:16:47.162 }' 00:16:47.162 06:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.162 06:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.729 06:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:47.729 [2024-08-14 06:48:14.951138] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev2 00:16:47.729 06:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:47.729 06:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:47.729 06:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:47.729 06:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:47.729 06:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:47.729 06:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:47.729 06:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.729 06:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.729 06:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.729 06:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.729 06:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.729 06:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.987 06:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.987 "name": "Existed_Raid", 00:16:47.987 "uuid": "2fc2712c-a6b9-40d2-a15f-e1b79814438f", 00:16:47.987 "strip_size_kb": 0, 00:16:47.987 "state": "configuring", 00:16:47.987 "raid_level": "raid1", 00:16:47.987 "superblock": true, 00:16:47.987 "num_base_bdevs": 4, 00:16:47.987 "num_base_bdevs_discovered": 2, 00:16:47.987 "num_base_bdevs_operational": 4, 00:16:47.987 "base_bdevs_list": [ 00:16:47.987 { 00:16:47.987 "name": "BaseBdev1", 00:16:47.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.987 "is_configured": false, 00:16:47.987 "data_offset": 0, 00:16:47.987 "data_size": 0 00:16:47.987 }, 00:16:47.987 { 00:16:47.987 "name": null, 00:16:47.987 "uuid": "91f5682c-7f50-43a4-bdfe-f2d84bf26b9e", 00:16:47.987 "is_configured": false, 00:16:47.987 "data_offset": 2048, 00:16:47.987 "data_size": 63488 00:16:47.987 }, 00:16:47.987 { 00:16:47.987 "name": "BaseBdev3", 00:16:47.987 "uuid": "01a3dd30-96fe-4f04-ad67-f6ad01a70c1e", 00:16:47.987 "is_configured": true, 00:16:47.987 "data_offset": 2048, 00:16:47.987 "data_size": 63488 00:16:47.987 }, 00:16:47.987 { 00:16:47.987 "name": "BaseBdev4", 00:16:47.987 "uuid": "f17ebd59-e8b7-4334-8164-3c9a0495c6df", 00:16:47.987 "is_configured": true, 00:16:47.987 "data_offset": 2048, 00:16:47.987 "data_size": 63488 00:16:47.987 } 00:16:47.987 ] 00:16:47.987 }' 00:16:47.987 06:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.987 06:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.553 06:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:48.553 06:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.838 06:48:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:48.838 06:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:49.098 BaseBdev1 00:16:49.098 [2024-08-14 06:48:16.200282] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:49.098 06:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:49.098 06:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:49.098 06:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:49.098 06:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:49.098 06:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:49.098 06:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:49.098 06:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:49.357 06:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:49.357 [ 00:16:49.357 { 00:16:49.357 "name": "BaseBdev1", 00:16:49.357 "aliases": [ 00:16:49.357 "f806470f-066d-407f-9013-4dbc326d75f4" 00:16:49.357 ], 00:16:49.357 "product_name": "Malloc disk", 00:16:49.357 "block_size": 512, 00:16:49.357 "num_blocks": 65536, 00:16:49.357 "uuid": "f806470f-066d-407f-9013-4dbc326d75f4", 00:16:49.357 "assigned_rate_limits": { 00:16:49.357 "rw_ios_per_sec": 0, 00:16:49.357 "rw_mbytes_per_sec": 0, 00:16:49.357 "r_mbytes_per_sec": 0, 00:16:49.357 "w_mbytes_per_sec": 0 00:16:49.357 }, 00:16:49.357 "claimed": true, 00:16:49.357 "claim_type": "exclusive_write", 00:16:49.357 "zoned": false, 00:16:49.357 "supported_io_types": { 00:16:49.357 "read": true, 00:16:49.357 "write": true, 00:16:49.357 "unmap": true, 00:16:49.357 "flush": true, 00:16:49.357 "reset": true, 00:16:49.357 "nvme_admin": false, 00:16:49.357 "nvme_io": false, 00:16:49.357 "nvme_io_md": false, 00:16:49.357 "write_zeroes": true, 00:16:49.357 "zcopy": true, 00:16:49.357 "get_zone_info": false, 00:16:49.357 "zone_management": false, 00:16:49.357 "zone_append": false, 00:16:49.357 "compare": false, 00:16:49.357 "compare_and_write": false, 00:16:49.357 "abort": true, 00:16:49.357 "seek_hole": false, 00:16:49.357 "seek_data": false, 00:16:49.357 "copy": true, 00:16:49.357 "nvme_iov_md": false 00:16:49.357 }, 00:16:49.357 "memory_domains": [ 00:16:49.357 { 00:16:49.357 "dma_device_id": "system", 00:16:49.357 "dma_device_type": 1 00:16:49.357 }, 00:16:49.357 { 00:16:49.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.357 "dma_device_type": 2 00:16:49.357 } 00:16:49.357 ], 00:16:49.357 "driver_specific": {} 00:16:49.357 } 00:16:49.357 ] 00:16:49.357 06:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:49.357 06:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:49.357 06:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:49.357 06:48:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:49.357 06:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:49.357 06:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:49.357 06:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:49.357 06:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:49.357 06:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:49.357 06:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:49.357 06:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:49.357 06:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.357 06:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.617 06:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.617 "name": "Existed_Raid", 00:16:49.617 "uuid": "2fc2712c-a6b9-40d2-a15f-e1b79814438f", 00:16:49.617 "strip_size_kb": 0, 00:16:49.617 "state": "configuring", 00:16:49.617 "raid_level": "raid1", 00:16:49.617 "superblock": true, 00:16:49.617 "num_base_bdevs": 4, 00:16:49.617 "num_base_bdevs_discovered": 3, 00:16:49.617 "num_base_bdevs_operational": 4, 00:16:49.617 "base_bdevs_list": [ 00:16:49.617 { 00:16:49.617 "name": "BaseBdev1", 00:16:49.617 "uuid": "f806470f-066d-407f-9013-4dbc326d75f4", 00:16:49.617 "is_configured": true, 00:16:49.617 "data_offset": 2048, 00:16:49.617 "data_size": 63488 00:16:49.617 }, 00:16:49.617 { 00:16:49.617 "name": null, 00:16:49.617 "uuid": "91f5682c-7f50-43a4-bdfe-f2d84bf26b9e", 00:16:49.617 "is_configured": false, 00:16:49.617 "data_offset": 2048, 00:16:49.617 "data_size": 63488 00:16:49.617 }, 00:16:49.617 { 00:16:49.617 "name": "BaseBdev3", 00:16:49.617 "uuid": "01a3dd30-96fe-4f04-ad67-f6ad01a70c1e", 00:16:49.617 "is_configured": true, 00:16:49.617 "data_offset": 2048, 00:16:49.617 "data_size": 63488 00:16:49.617 }, 00:16:49.617 { 00:16:49.617 "name": "BaseBdev4", 00:16:49.617 "uuid": "f17ebd59-e8b7-4334-8164-3c9a0495c6df", 00:16:49.617 "is_configured": true, 00:16:49.617 "data_offset": 2048, 00:16:49.617 "data_size": 63488 00:16:49.617 } 00:16:49.617 ] 00:16:49.617 }' 00:16:49.617 06:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.617 06:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.187 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:50.187 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.446 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:50.446 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:50.706 [2024-08-14 06:48:17.753887] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:50.706 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:50.706 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:50.706 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:50.706 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:50.706 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:50.706 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:50.706 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:50.706 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:50.706 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:50.706 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:50.706 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.706 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.966 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:50.966 "name": "Existed_Raid", 00:16:50.966 "uuid": "2fc2712c-a6b9-40d2-a15f-e1b79814438f", 00:16:50.966 "strip_size_kb": 0, 00:16:50.966 "state": "configuring", 00:16:50.966 "raid_level": "raid1", 00:16:50.966 "superblock": true, 00:16:50.966 "num_base_bdevs": 4, 00:16:50.966 "num_base_bdevs_discovered": 2, 00:16:50.966 "num_base_bdevs_operational": 4, 00:16:50.966 "base_bdevs_list": [ 00:16:50.966 { 00:16:50.966 "name": "BaseBdev1", 00:16:50.966 "uuid": "f806470f-066d-407f-9013-4dbc326d75f4", 00:16:50.966 "is_configured": true, 00:16:50.966 "data_offset": 2048, 00:16:50.966 "data_size": 63488 00:16:50.966 }, 00:16:50.966 { 00:16:50.966 "name": null, 00:16:50.966 "uuid": "91f5682c-7f50-43a4-bdfe-f2d84bf26b9e", 00:16:50.966 "is_configured": false, 00:16:50.966 "data_offset": 2048, 00:16:50.966 "data_size": 63488 00:16:50.966 }, 00:16:50.966 { 00:16:50.966 "name": null, 00:16:50.966 "uuid": "01a3dd30-96fe-4f04-ad67-f6ad01a70c1e", 00:16:50.966 "is_configured": false, 00:16:50.966 "data_offset": 2048, 00:16:50.966 "data_size": 63488 00:16:50.966 }, 00:16:50.966 { 00:16:50.966 "name": "BaseBdev4", 00:16:50.966 "uuid": "f17ebd59-e8b7-4334-8164-3c9a0495c6df", 00:16:50.966 "is_configured": true, 00:16:50.966 "data_offset": 2048, 00:16:50.966 "data_size": 63488 00:16:50.966 } 00:16:50.966 ] 00:16:50.966 }' 00:16:50.966 06:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:50.966 06:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.555 06:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.555 06:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:51.555 06:48:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:51.555 06:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:51.815 [2024-08-14 06:48:18.872065] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:51.815 06:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:51.815 06:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:51.815 06:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:51.815 06:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:51.815 06:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:51.815 06:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:51.815 06:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.815 06:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.815 06:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.815 06:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.815 06:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.815 06:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.075 06:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:52.075 "name": "Existed_Raid", 00:16:52.075 "uuid": "2fc2712c-a6b9-40d2-a15f-e1b79814438f", 00:16:52.075 "strip_size_kb": 0, 00:16:52.075 "state": "configuring", 00:16:52.075 "raid_level": "raid1", 00:16:52.075 "superblock": true, 00:16:52.075 "num_base_bdevs": 4, 00:16:52.075 "num_base_bdevs_discovered": 3, 00:16:52.075 "num_base_bdevs_operational": 4, 00:16:52.075 "base_bdevs_list": [ 00:16:52.075 { 00:16:52.075 "name": "BaseBdev1", 00:16:52.075 "uuid": "f806470f-066d-407f-9013-4dbc326d75f4", 00:16:52.075 "is_configured": true, 00:16:52.075 "data_offset": 2048, 00:16:52.075 "data_size": 63488 00:16:52.075 }, 00:16:52.075 { 00:16:52.075 "name": null, 00:16:52.075 "uuid": "91f5682c-7f50-43a4-bdfe-f2d84bf26b9e", 00:16:52.075 "is_configured": false, 00:16:52.075 "data_offset": 2048, 00:16:52.075 "data_size": 63488 00:16:52.075 }, 00:16:52.075 { 00:16:52.075 "name": "BaseBdev3", 00:16:52.075 "uuid": "01a3dd30-96fe-4f04-ad67-f6ad01a70c1e", 00:16:52.075 "is_configured": true, 00:16:52.075 "data_offset": 2048, 00:16:52.075 "data_size": 63488 00:16:52.075 }, 00:16:52.075 { 00:16:52.075 "name": "BaseBdev4", 00:16:52.075 "uuid": "f17ebd59-e8b7-4334-8164-3c9a0495c6df", 00:16:52.075 "is_configured": true, 00:16:52.075 "data_offset": 2048, 00:16:52.075 "data_size": 63488 00:16:52.075 } 00:16:52.075 ] 00:16:52.075 }' 00:16:52.075 06:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:52.075 06:48:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.644 06:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.644 06:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:52.644 06:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:52.644 06:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:52.903 [2024-08-14 06:48:20.086194] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:52.903 06:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:52.903 06:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:52.903 06:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:52.903 06:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:52.903 06:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:52.903 06:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:52.903 06:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:52.903 06:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:52.903 06:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:52.903 06:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:52.903 06:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.903 06:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.163 06:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:53.163 "name": "Existed_Raid", 00:16:53.163 "uuid": "2fc2712c-a6b9-40d2-a15f-e1b79814438f", 00:16:53.163 "strip_size_kb": 0, 00:16:53.163 "state": "configuring", 00:16:53.163 "raid_level": "raid1", 00:16:53.163 "superblock": true, 00:16:53.163 "num_base_bdevs": 4, 00:16:53.163 "num_base_bdevs_discovered": 2, 00:16:53.163 "num_base_bdevs_operational": 4, 00:16:53.163 "base_bdevs_list": [ 00:16:53.163 { 00:16:53.163 "name": null, 00:16:53.163 "uuid": "f806470f-066d-407f-9013-4dbc326d75f4", 00:16:53.163 "is_configured": false, 00:16:53.163 "data_offset": 2048, 00:16:53.163 "data_size": 63488 00:16:53.163 }, 00:16:53.163 { 00:16:53.163 "name": null, 00:16:53.163 "uuid": "91f5682c-7f50-43a4-bdfe-f2d84bf26b9e", 00:16:53.163 "is_configured": false, 00:16:53.163 "data_offset": 2048, 00:16:53.163 "data_size": 63488 00:16:53.163 }, 00:16:53.163 { 00:16:53.163 "name": "BaseBdev3", 00:16:53.163 "uuid": "01a3dd30-96fe-4f04-ad67-f6ad01a70c1e", 00:16:53.163 "is_configured": true, 00:16:53.163 "data_offset": 2048, 00:16:53.163 "data_size": 63488 00:16:53.163 }, 00:16:53.163 { 00:16:53.163 "name": "BaseBdev4", 00:16:53.163 "uuid": 
"f17ebd59-e8b7-4334-8164-3c9a0495c6df", 00:16:53.163 "is_configured": true, 00:16:53.163 "data_offset": 2048, 00:16:53.163 "data_size": 63488 00:16:53.163 } 00:16:53.163 ] 00:16:53.163 }' 00:16:53.163 06:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:53.163 06:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.732 06:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.732 06:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:53.992 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:53.992 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:53.992 [2024-08-14 06:48:21.192845] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.992 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:53.992 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:53.992 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:53.992 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:53.992 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:53.992 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:53.992 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:53.992 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:53.992 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:53.992 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:53.992 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.992 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.252 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.252 "name": "Existed_Raid", 00:16:54.252 "uuid": "2fc2712c-a6b9-40d2-a15f-e1b79814438f", 00:16:54.252 "strip_size_kb": 0, 00:16:54.252 "state": "configuring", 00:16:54.252 "raid_level": "raid1", 00:16:54.252 "superblock": true, 00:16:54.252 "num_base_bdevs": 4, 00:16:54.252 "num_base_bdevs_discovered": 3, 00:16:54.252 "num_base_bdevs_operational": 4, 00:16:54.252 "base_bdevs_list": [ 00:16:54.252 { 00:16:54.252 "name": null, 00:16:54.252 "uuid": "f806470f-066d-407f-9013-4dbc326d75f4", 00:16:54.252 "is_configured": false, 00:16:54.252 "data_offset": 2048, 00:16:54.252 "data_size": 63488 00:16:54.252 }, 00:16:54.252 { 00:16:54.252 "name": "BaseBdev2", 00:16:54.252 "uuid": "91f5682c-7f50-43a4-bdfe-f2d84bf26b9e", 00:16:54.252 "is_configured": true, 
00:16:54.252 "data_offset": 2048, 00:16:54.252 "data_size": 63488 00:16:54.252 }, 00:16:54.252 { 00:16:54.252 "name": "BaseBdev3", 00:16:54.252 "uuid": "01a3dd30-96fe-4f04-ad67-f6ad01a70c1e", 00:16:54.252 "is_configured": true, 00:16:54.252 "data_offset": 2048, 00:16:54.252 "data_size": 63488 00:16:54.252 }, 00:16:54.252 { 00:16:54.252 "name": "BaseBdev4", 00:16:54.252 "uuid": "f17ebd59-e8b7-4334-8164-3c9a0495c6df", 00:16:54.252 "is_configured": true, 00:16:54.252 "data_offset": 2048, 00:16:54.252 "data_size": 63488 00:16:54.252 } 00:16:54.252 ] 00:16:54.252 }' 00:16:54.252 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.252 06:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.821 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:54.821 06:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.081 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:55.081 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.081 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:55.081 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f806470f-066d-407f-9013-4dbc326d75f4 00:16:55.340 [2024-08-14 06:48:22.503427] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:55.340 [2024-08-14 06:48:22.503766] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:16:55.340 [2024-08-14 06:48:22.503818] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:55.340 [2024-08-14 06:48:22.504151] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:16:55.340 [2024-08-14 06:48:22.504350] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:16:55.340 [2024-08-14 06:48:22.504398] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:16:55.340 NewBaseBdev 00:16:55.340 [2024-08-14 06:48:22.504549] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.340 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:55.340 06:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:16:55.340 06:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:55.340 06:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:55.340 06:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:55.340 06:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:55.340 06:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:16:55.599 06:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:55.859 [ 00:16:55.859 { 00:16:55.859 "name": "NewBaseBdev", 00:16:55.859 "aliases": [ 00:16:55.859 "f806470f-066d-407f-9013-4dbc326d75f4" 00:16:55.859 ], 00:16:55.859 "product_name": "Malloc disk", 00:16:55.859 "block_size": 512, 00:16:55.859 "num_blocks": 65536, 00:16:55.859 "uuid": "f806470f-066d-407f-9013-4dbc326d75f4", 00:16:55.859 "assigned_rate_limits": { 00:16:55.859 "rw_ios_per_sec": 0, 00:16:55.859 "rw_mbytes_per_sec": 0, 00:16:55.859 "r_mbytes_per_sec": 0, 00:16:55.859 "w_mbytes_per_sec": 0 00:16:55.859 }, 00:16:55.859 "claimed": true, 00:16:55.859 "claim_type": "exclusive_write", 00:16:55.859 "zoned": false, 00:16:55.859 "supported_io_types": { 00:16:55.859 "read": true, 00:16:55.859 "write": true, 00:16:55.859 "unmap": true, 00:16:55.859 "flush": true, 00:16:55.859 "reset": true, 00:16:55.859 "nvme_admin": false, 00:16:55.859 "nvme_io": false, 00:16:55.859 "nvme_io_md": false, 00:16:55.859 "write_zeroes": true, 00:16:55.859 "zcopy": true, 00:16:55.859 "get_zone_info": false, 00:16:55.859 "zone_management": false, 00:16:55.859 "zone_append": false, 00:16:55.859 "compare": false, 00:16:55.859 "compare_and_write": false, 00:16:55.859 "abort": true, 00:16:55.859 "seek_hole": false, 00:16:55.859 "seek_data": false, 00:16:55.859 "copy": true, 00:16:55.859 "nvme_iov_md": false 00:16:55.859 }, 00:16:55.859 "memory_domains": [ 00:16:55.859 { 00:16:55.859 "dma_device_id": "system", 00:16:55.859 "dma_device_type": 1 00:16:55.859 }, 00:16:55.859 { 00:16:55.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.859 "dma_device_type": 2 00:16:55.859 } 00:16:55.859 ], 00:16:55.859 "driver_specific": {} 00:16:55.859 } 00:16:55.859 ] 00:16:55.859 06:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:55.859 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:55.859 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:55.859 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:55.859 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:55.859 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:55.859 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:55.859 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.859 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.859 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.859 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.859 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.859 06:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.859 06:48:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:55.859 "name": "Existed_Raid", 00:16:55.859 "uuid": "2fc2712c-a6b9-40d2-a15f-e1b79814438f", 00:16:55.859 "strip_size_kb": 0, 00:16:55.859 "state": "online", 00:16:55.859 "raid_level": "raid1", 00:16:55.859 "superblock": true, 00:16:55.859 "num_base_bdevs": 4, 00:16:55.859 "num_base_bdevs_discovered": 4, 00:16:55.859 "num_base_bdevs_operational": 4, 00:16:55.859 "base_bdevs_list": [ 00:16:55.859 { 00:16:55.859 "name": "NewBaseBdev", 00:16:55.859 "uuid": "f806470f-066d-407f-9013-4dbc326d75f4", 00:16:55.859 "is_configured": true, 00:16:55.859 "data_offset": 2048, 00:16:55.859 "data_size": 63488 00:16:55.859 }, 00:16:55.859 { 00:16:55.859 "name": "BaseBdev2", 00:16:55.860 "uuid": "91f5682c-7f50-43a4-bdfe-f2d84bf26b9e", 00:16:55.860 "is_configured": true, 00:16:55.860 "data_offset": 2048, 00:16:55.860 "data_size": 63488 00:16:55.860 }, 00:16:55.860 { 00:16:55.860 "name": "BaseBdev3", 00:16:55.860 "uuid": "01a3dd30-96fe-4f04-ad67-f6ad01a70c1e", 00:16:55.860 "is_configured": true, 00:16:55.860 "data_offset": 2048, 00:16:55.860 "data_size": 63488 00:16:55.860 }, 00:16:55.860 { 00:16:55.860 "name": "BaseBdev4", 00:16:55.860 "uuid": "f17ebd59-e8b7-4334-8164-3c9a0495c6df", 00:16:55.860 "is_configured": true, 00:16:55.860 "data_offset": 2048, 00:16:55.860 "data_size": 63488 00:16:55.860 } 00:16:55.860 ] 00:16:55.860 }' 00:16:55.860 06:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:55.860 06:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.429 06:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:56.429 06:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:56.429 06:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:56.429 06:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:56.429 06:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:56.429 06:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:56.429 06:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:56.429 06:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:56.688 [2024-08-14 06:48:23.837752] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.688 06:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:56.688 "name": "Existed_Raid", 00:16:56.688 "aliases": [ 00:16:56.688 "2fc2712c-a6b9-40d2-a15f-e1b79814438f" 00:16:56.688 ], 00:16:56.688 "product_name": "Raid Volume", 00:16:56.688 "block_size": 512, 00:16:56.688 "num_blocks": 63488, 00:16:56.688 "uuid": "2fc2712c-a6b9-40d2-a15f-e1b79814438f", 00:16:56.688 "assigned_rate_limits": { 00:16:56.688 "rw_ios_per_sec": 0, 00:16:56.688 "rw_mbytes_per_sec": 0, 00:16:56.688 "r_mbytes_per_sec": 0, 00:16:56.688 "w_mbytes_per_sec": 0 00:16:56.688 }, 00:16:56.688 "claimed": false, 00:16:56.688 "zoned": false, 00:16:56.688 "supported_io_types": { 00:16:56.688 "read": true, 00:16:56.688 "write": true, 00:16:56.688 "unmap": false, 00:16:56.688 "flush": false, 
00:16:56.688 "reset": true, 00:16:56.688 "nvme_admin": false, 00:16:56.688 "nvme_io": false, 00:16:56.688 "nvme_io_md": false, 00:16:56.688 "write_zeroes": true, 00:16:56.688 "zcopy": false, 00:16:56.688 "get_zone_info": false, 00:16:56.688 "zone_management": false, 00:16:56.688 "zone_append": false, 00:16:56.688 "compare": false, 00:16:56.688 "compare_and_write": false, 00:16:56.688 "abort": false, 00:16:56.688 "seek_hole": false, 00:16:56.688 "seek_data": false, 00:16:56.688 "copy": false, 00:16:56.688 "nvme_iov_md": false 00:16:56.688 }, 00:16:56.688 "memory_domains": [ 00:16:56.688 { 00:16:56.688 "dma_device_id": "system", 00:16:56.688 "dma_device_type": 1 00:16:56.688 }, 00:16:56.688 { 00:16:56.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.688 "dma_device_type": 2 00:16:56.688 }, 00:16:56.688 { 00:16:56.688 "dma_device_id": "system", 00:16:56.688 "dma_device_type": 1 00:16:56.688 }, 00:16:56.688 { 00:16:56.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.688 "dma_device_type": 2 00:16:56.688 }, 00:16:56.688 { 00:16:56.688 "dma_device_id": "system", 00:16:56.688 "dma_device_type": 1 00:16:56.688 }, 00:16:56.688 { 00:16:56.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.688 "dma_device_type": 2 00:16:56.688 }, 00:16:56.688 { 00:16:56.688 "dma_device_id": "system", 00:16:56.688 "dma_device_type": 1 00:16:56.688 }, 00:16:56.688 { 00:16:56.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.688 "dma_device_type": 2 00:16:56.688 } 00:16:56.688 ], 00:16:56.688 "driver_specific": { 00:16:56.688 "raid": { 00:16:56.688 "uuid": "2fc2712c-a6b9-40d2-a15f-e1b79814438f", 00:16:56.688 "strip_size_kb": 0, 00:16:56.688 "state": "online", 00:16:56.688 "raid_level": "raid1", 00:16:56.688 "superblock": true, 00:16:56.688 "num_base_bdevs": 4, 00:16:56.688 "num_base_bdevs_discovered": 4, 00:16:56.688 "num_base_bdevs_operational": 4, 00:16:56.688 "base_bdevs_list": [ 00:16:56.688 { 00:16:56.688 "name": "NewBaseBdev", 00:16:56.688 "uuid": "f806470f-066d-407f-9013-4dbc326d75f4", 00:16:56.688 "is_configured": true, 00:16:56.688 "data_offset": 2048, 00:16:56.688 "data_size": 63488 00:16:56.688 }, 00:16:56.688 { 00:16:56.688 "name": "BaseBdev2", 00:16:56.688 "uuid": "91f5682c-7f50-43a4-bdfe-f2d84bf26b9e", 00:16:56.688 "is_configured": true, 00:16:56.688 "data_offset": 2048, 00:16:56.688 "data_size": 63488 00:16:56.688 }, 00:16:56.688 { 00:16:56.688 "name": "BaseBdev3", 00:16:56.688 "uuid": "01a3dd30-96fe-4f04-ad67-f6ad01a70c1e", 00:16:56.688 "is_configured": true, 00:16:56.688 "data_offset": 2048, 00:16:56.688 "data_size": 63488 00:16:56.688 }, 00:16:56.688 { 00:16:56.688 "name": "BaseBdev4", 00:16:56.688 "uuid": "f17ebd59-e8b7-4334-8164-3c9a0495c6df", 00:16:56.688 "is_configured": true, 00:16:56.688 "data_offset": 2048, 00:16:56.688 "data_size": 63488 00:16:56.688 } 00:16:56.688 ] 00:16:56.688 } 00:16:56.688 } 00:16:56.688 }' 00:16:56.688 06:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:56.688 06:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:56.688 BaseBdev2 00:16:56.688 BaseBdev3 00:16:56.688 BaseBdev4' 00:16:56.689 06:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:56.689 06:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 
00:16:56.689 06:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:56.948 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:56.948 "name": "NewBaseBdev", 00:16:56.948 "aliases": [ 00:16:56.948 "f806470f-066d-407f-9013-4dbc326d75f4" 00:16:56.948 ], 00:16:56.948 "product_name": "Malloc disk", 00:16:56.948 "block_size": 512, 00:16:56.948 "num_blocks": 65536, 00:16:56.948 "uuid": "f806470f-066d-407f-9013-4dbc326d75f4", 00:16:56.948 "assigned_rate_limits": { 00:16:56.948 "rw_ios_per_sec": 0, 00:16:56.948 "rw_mbytes_per_sec": 0, 00:16:56.948 "r_mbytes_per_sec": 0, 00:16:56.948 "w_mbytes_per_sec": 0 00:16:56.948 }, 00:16:56.948 "claimed": true, 00:16:56.948 "claim_type": "exclusive_write", 00:16:56.948 "zoned": false, 00:16:56.948 "supported_io_types": { 00:16:56.948 "read": true, 00:16:56.948 "write": true, 00:16:56.948 "unmap": true, 00:16:56.948 "flush": true, 00:16:56.948 "reset": true, 00:16:56.948 "nvme_admin": false, 00:16:56.948 "nvme_io": false, 00:16:56.948 "nvme_io_md": false, 00:16:56.948 "write_zeroes": true, 00:16:56.948 "zcopy": true, 00:16:56.948 "get_zone_info": false, 00:16:56.948 "zone_management": false, 00:16:56.948 "zone_append": false, 00:16:56.948 "compare": false, 00:16:56.948 "compare_and_write": false, 00:16:56.948 "abort": true, 00:16:56.948 "seek_hole": false, 00:16:56.948 "seek_data": false, 00:16:56.948 "copy": true, 00:16:56.948 "nvme_iov_md": false 00:16:56.948 }, 00:16:56.948 "memory_domains": [ 00:16:56.948 { 00:16:56.948 "dma_device_id": "system", 00:16:56.948 "dma_device_type": 1 00:16:56.948 }, 00:16:56.948 { 00:16:56.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.948 "dma_device_type": 2 00:16:56.948 } 00:16:56.948 ], 00:16:56.948 "driver_specific": {} 00:16:56.948 }' 00:16:56.948 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:56.948 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:56.948 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:56.948 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:57.208 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:57.208 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:57.208 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:57.208 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:57.208 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:57.208 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:57.208 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:57.467 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:57.467 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:57.467 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:57.467 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:57.467 06:48:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:57.467 "name": "BaseBdev2", 00:16:57.467 "aliases": [ 00:16:57.467 "91f5682c-7f50-43a4-bdfe-f2d84bf26b9e" 00:16:57.467 ], 00:16:57.467 "product_name": "Malloc disk", 00:16:57.467 "block_size": 512, 00:16:57.467 "num_blocks": 65536, 00:16:57.467 "uuid": "91f5682c-7f50-43a4-bdfe-f2d84bf26b9e", 00:16:57.467 "assigned_rate_limits": { 00:16:57.467 "rw_ios_per_sec": 0, 00:16:57.467 "rw_mbytes_per_sec": 0, 00:16:57.467 "r_mbytes_per_sec": 0, 00:16:57.467 "w_mbytes_per_sec": 0 00:16:57.467 }, 00:16:57.467 "claimed": true, 00:16:57.467 "claim_type": "exclusive_write", 00:16:57.467 "zoned": false, 00:16:57.467 "supported_io_types": { 00:16:57.467 "read": true, 00:16:57.467 "write": true, 00:16:57.467 "unmap": true, 00:16:57.467 "flush": true, 00:16:57.467 "reset": true, 00:16:57.467 "nvme_admin": false, 00:16:57.467 "nvme_io": false, 00:16:57.467 "nvme_io_md": false, 00:16:57.467 "write_zeroes": true, 00:16:57.467 "zcopy": true, 00:16:57.467 "get_zone_info": false, 00:16:57.467 "zone_management": false, 00:16:57.467 "zone_append": false, 00:16:57.467 "compare": false, 00:16:57.467 "compare_and_write": false, 00:16:57.467 "abort": true, 00:16:57.467 "seek_hole": false, 00:16:57.467 "seek_data": false, 00:16:57.467 "copy": true, 00:16:57.467 "nvme_iov_md": false 00:16:57.467 }, 00:16:57.467 "memory_domains": [ 00:16:57.467 { 00:16:57.467 "dma_device_id": "system", 00:16:57.467 "dma_device_type": 1 00:16:57.467 }, 00:16:57.467 { 00:16:57.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.467 "dma_device_type": 2 00:16:57.467 } 00:16:57.467 ], 00:16:57.467 "driver_specific": {} 00:16:57.467 }' 00:16:57.467 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:57.467 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:57.727 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:57.727 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:57.727 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:57.727 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:57.727 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:57.727 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:57.727 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:57.727 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:57.727 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:57.986 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:57.986 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:57.986 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:57.986 06:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:57.986 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:57.986 "name": "BaseBdev3", 00:16:57.986 "aliases": [ 
00:16:57.986 "01a3dd30-96fe-4f04-ad67-f6ad01a70c1e" 00:16:57.986 ], 00:16:57.986 "product_name": "Malloc disk", 00:16:57.986 "block_size": 512, 00:16:57.986 "num_blocks": 65536, 00:16:57.986 "uuid": "01a3dd30-96fe-4f04-ad67-f6ad01a70c1e", 00:16:57.986 "assigned_rate_limits": { 00:16:57.986 "rw_ios_per_sec": 0, 00:16:57.986 "rw_mbytes_per_sec": 0, 00:16:57.986 "r_mbytes_per_sec": 0, 00:16:57.986 "w_mbytes_per_sec": 0 00:16:57.986 }, 00:16:57.986 "claimed": true, 00:16:57.986 "claim_type": "exclusive_write", 00:16:57.986 "zoned": false, 00:16:57.986 "supported_io_types": { 00:16:57.986 "read": true, 00:16:57.986 "write": true, 00:16:57.986 "unmap": true, 00:16:57.986 "flush": true, 00:16:57.986 "reset": true, 00:16:57.986 "nvme_admin": false, 00:16:57.986 "nvme_io": false, 00:16:57.986 "nvme_io_md": false, 00:16:57.986 "write_zeroes": true, 00:16:57.986 "zcopy": true, 00:16:57.986 "get_zone_info": false, 00:16:57.986 "zone_management": false, 00:16:57.986 "zone_append": false, 00:16:57.986 "compare": false, 00:16:57.986 "compare_and_write": false, 00:16:57.986 "abort": true, 00:16:57.986 "seek_hole": false, 00:16:57.986 "seek_data": false, 00:16:57.986 "copy": true, 00:16:57.986 "nvme_iov_md": false 00:16:57.986 }, 00:16:57.986 "memory_domains": [ 00:16:57.986 { 00:16:57.986 "dma_device_id": "system", 00:16:57.986 "dma_device_type": 1 00:16:57.986 }, 00:16:57.986 { 00:16:57.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.986 "dma_device_type": 2 00:16:57.986 } 00:16:57.986 ], 00:16:57.986 "driver_specific": {} 00:16:57.986 }' 00:16:57.986 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.245 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.245 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:58.245 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.245 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.245 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:58.245 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.245 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.245 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:58.245 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.505 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.505 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:58.505 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:58.505 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:58.505 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:58.505 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:58.505 "name": "BaseBdev4", 00:16:58.505 "aliases": [ 00:16:58.505 "f17ebd59-e8b7-4334-8164-3c9a0495c6df" 00:16:58.505 ], 00:16:58.505 "product_name": "Malloc disk", 00:16:58.505 "block_size": 512, 
00:16:58.505 "num_blocks": 65536, 00:16:58.505 "uuid": "f17ebd59-e8b7-4334-8164-3c9a0495c6df", 00:16:58.505 "assigned_rate_limits": { 00:16:58.505 "rw_ios_per_sec": 0, 00:16:58.505 "rw_mbytes_per_sec": 0, 00:16:58.505 "r_mbytes_per_sec": 0, 00:16:58.505 "w_mbytes_per_sec": 0 00:16:58.505 }, 00:16:58.505 "claimed": true, 00:16:58.505 "claim_type": "exclusive_write", 00:16:58.505 "zoned": false, 00:16:58.505 "supported_io_types": { 00:16:58.505 "read": true, 00:16:58.505 "write": true, 00:16:58.505 "unmap": true, 00:16:58.505 "flush": true, 00:16:58.505 "reset": true, 00:16:58.505 "nvme_admin": false, 00:16:58.505 "nvme_io": false, 00:16:58.505 "nvme_io_md": false, 00:16:58.505 "write_zeroes": true, 00:16:58.505 "zcopy": true, 00:16:58.505 "get_zone_info": false, 00:16:58.505 "zone_management": false, 00:16:58.505 "zone_append": false, 00:16:58.505 "compare": false, 00:16:58.505 "compare_and_write": false, 00:16:58.505 "abort": true, 00:16:58.505 "seek_hole": false, 00:16:58.505 "seek_data": false, 00:16:58.505 "copy": true, 00:16:58.505 "nvme_iov_md": false 00:16:58.505 }, 00:16:58.505 "memory_domains": [ 00:16:58.505 { 00:16:58.505 "dma_device_id": "system", 00:16:58.505 "dma_device_type": 1 00:16:58.505 }, 00:16:58.505 { 00:16:58.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.505 "dma_device_type": 2 00:16:58.505 } 00:16:58.505 ], 00:16:58.505 "driver_specific": {} 00:16:58.505 }' 00:16:58.505 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.764 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.764 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:58.764 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.764 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.764 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:58.764 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.764 06:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.024 06:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:59.024 06:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.024 06:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.024 06:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:59.024 06:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:59.024 [2024-08-14 06:48:26.273341] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:59.024 [2024-08-14 06:48:26.273403] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.024 [2024-08-14 06:48:26.273550] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.024 [2024-08-14 06:48:26.273906] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.024 [2024-08-14 06:48:26.273922] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 
00:16:59.284 06:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 90327 00:16:59.284 06:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 90327 ']' 00:16:59.284 06:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 90327 00:16:59.284 06:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:16:59.284 06:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:59.284 06:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90327 00:16:59.284 06:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:59.284 06:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:59.284 killing process with pid 90327 00:16:59.284 06:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90327' 00:16:59.284 06:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 90327 00:16:59.284 [2024-08-14 06:48:26.337066] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:59.284 06:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 90327 00:16:59.284 [2024-08-14 06:48:26.417786] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:59.544 06:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:59.544 00:16:59.544 real 0m29.098s 00:16:59.544 user 0m54.107s 00:16:59.544 sys 0m4.332s 00:16:59.544 06:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:59.544 ************************************ 00:16:59.544 END TEST raid_state_function_test_sb 00:16:59.544 ************************************ 00:16:59.544 06:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.804 06:48:26 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:16:59.804 06:48:26 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:59.804 06:48:26 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:59.804 06:48:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:59.804 ************************************ 00:16:59.804 START TEST raid_superblock_test 00:16:59.804 ************************************ 00:16:59.804 06:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 4 00:16:59.804 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # 
local base_bdevs_pt_uuid 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=91352 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 91352 /var/tmp/spdk-raid.sock 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 91352 ']' 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:59.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:59.805 06:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.805 [2024-08-14 06:48:26.952704] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
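Once the bdev_svc app is up, the test assembles its fixture. A hedged sketch of that sequence, condensed from the rpc.py calls that appear verbatim in the trace below (the loop is only shorthand for the four repetitions, and the $rpc variable is an illustrative convenience), is:

    # Hedged sketch of the raid_superblock_test fixture built in the trace below.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Four malloc bdevs (dumped later as 65536 blocks of 512 B), each wrapped in a
    # passthru bdev carrying the fixed UUID that shows up in the subsequent dumps.
    for i in 1 2 3 4; do
      $rpc bdev_malloc_create 32 512 -b "malloc$i"
      $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
           -u "00000000-0000-0000-0000-00000000000$i"
    done

    # RAID1 volume over the passthru bdevs, created with an on-disk superblock (-s)
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
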
00:16:59.805 [2024-08-14 06:48:26.952829] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91352 ] 00:17:00.064 [2024-08-14 06:48:27.102800] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.064 [2024-08-14 06:48:27.183295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.064 [2024-08-14 06:48:27.262943] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.064 [2024-08-14 06:48:27.262986] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.633 06:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:00.633 06:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:17:00.633 06:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:17:00.633 06:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:00.633 06:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:17:00.633 06:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:17:00.633 06:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:00.633 06:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:00.633 06:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:00.633 06:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:00.633 06:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:00.893 malloc1 00:17:00.893 06:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:01.152 [2024-08-14 06:48:28.177094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:01.152 [2024-08-14 06:48:28.177358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.152 [2024-08-14 06:48:28.177416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:01.152 [2024-08-14 06:48:28.177450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.152 [2024-08-14 06:48:28.180322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.152 [2024-08-14 06:48:28.180404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:01.152 pt1 00:17:01.152 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:01.152 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:01.152 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:17:01.152 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:17:01.152 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:01.152 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.152 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.152 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.152 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:01.412 malloc2 00:17:01.412 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:01.412 [2024-08-14 06:48:28.624188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:01.412 [2024-08-14 06:48:28.624399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.412 [2024-08-14 06:48:28.624444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:01.412 [2024-08-14 06:48:28.624474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.412 [2024-08-14 06:48:28.627125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.412 [2024-08-14 06:48:28.627231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:01.412 pt2 00:17:01.412 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:01.412 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:01.412 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:17:01.412 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:17:01.412 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:01.412 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.412 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.412 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.412 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:01.672 malloc3 00:17:01.672 06:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:01.932 [2024-08-14 06:48:29.106032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:01.932 [2024-08-14 06:48:29.106144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.932 [2024-08-14 06:48:29.106195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:01.932 [2024-08-14 06:48:29.106208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.932 [2024-08-14 06:48:29.108951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.932 pt3 00:17:01.932 
[2024-08-14 06:48:29.109089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:01.932 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:01.932 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:01.932 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:17:01.932 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:17:01.932 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:01.932 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.932 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.932 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.932 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:02.192 malloc4 00:17:02.192 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:02.451 [2024-08-14 06:48:29.528511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:02.451 [2024-08-14 06:48:29.528720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.451 [2024-08-14 06:48:29.528767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:02.451 [2024-08-14 06:48:29.528797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.451 [2024-08-14 06:48:29.531524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.451 [2024-08-14 06:48:29.531610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:02.451 pt4 00:17:02.451 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:02.451 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:02.451 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:02.711 [2024-08-14 06:48:29.748221] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:02.711 [2024-08-14 06:48:29.750871] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:02.711 [2024-08-14 06:48:29.751023] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:02.711 [2024-08-14 06:48:29.751100] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:02.711 [2024-08-14 06:48:29.751380] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:17:02.711 [2024-08-14 06:48:29.751429] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:02.711 [2024-08-14 06:48:29.751799] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:17:02.711 [2024-08-14 06:48:29.752031] bdev_raid.c:1751:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000001200 00:17:02.711 [2024-08-14 06:48:29.752077] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:17:02.711 [2024-08-14 06:48:29.752365] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.711 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:02.711 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:02.711 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:02.711 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:02.711 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:02.711 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:02.711 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:02.711 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:02.711 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:02.711 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:02.711 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.711 06:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.970 06:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:02.970 "name": "raid_bdev1", 00:17:02.970 "uuid": "368a7197-f7be-4444-aab3-838adbd17810", 00:17:02.970 "strip_size_kb": 0, 00:17:02.970 "state": "online", 00:17:02.971 "raid_level": "raid1", 00:17:02.971 "superblock": true, 00:17:02.971 "num_base_bdevs": 4, 00:17:02.971 "num_base_bdevs_discovered": 4, 00:17:02.971 "num_base_bdevs_operational": 4, 00:17:02.971 "base_bdevs_list": [ 00:17:02.971 { 00:17:02.971 "name": "pt1", 00:17:02.971 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.971 "is_configured": true, 00:17:02.971 "data_offset": 2048, 00:17:02.971 "data_size": 63488 00:17:02.971 }, 00:17:02.971 { 00:17:02.971 "name": "pt2", 00:17:02.971 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.971 "is_configured": true, 00:17:02.971 "data_offset": 2048, 00:17:02.971 "data_size": 63488 00:17:02.971 }, 00:17:02.971 { 00:17:02.971 "name": "pt3", 00:17:02.971 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:02.971 "is_configured": true, 00:17:02.971 "data_offset": 2048, 00:17:02.971 "data_size": 63488 00:17:02.971 }, 00:17:02.971 { 00:17:02.971 "name": "pt4", 00:17:02.971 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:02.971 "is_configured": true, 00:17:02.971 "data_offset": 2048, 00:17:02.971 "data_size": 63488 00:17:02.971 } 00:17:02.971 ] 00:17:02.971 }' 00:17:02.971 06:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:02.971 06:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.538 06:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:17:03.538 06:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=raid_bdev1 00:17:03.538 06:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:03.538 06:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:03.538 06:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:03.538 06:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:03.538 06:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:03.538 06:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:03.538 [2024-08-14 06:48:30.770986] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.538 06:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:03.538 "name": "raid_bdev1", 00:17:03.538 "aliases": [ 00:17:03.538 "368a7197-f7be-4444-aab3-838adbd17810" 00:17:03.538 ], 00:17:03.538 "product_name": "Raid Volume", 00:17:03.538 "block_size": 512, 00:17:03.538 "num_blocks": 63488, 00:17:03.538 "uuid": "368a7197-f7be-4444-aab3-838adbd17810", 00:17:03.538 "assigned_rate_limits": { 00:17:03.538 "rw_ios_per_sec": 0, 00:17:03.538 "rw_mbytes_per_sec": 0, 00:17:03.538 "r_mbytes_per_sec": 0, 00:17:03.538 "w_mbytes_per_sec": 0 00:17:03.538 }, 00:17:03.538 "claimed": false, 00:17:03.538 "zoned": false, 00:17:03.538 "supported_io_types": { 00:17:03.538 "read": true, 00:17:03.538 "write": true, 00:17:03.538 "unmap": false, 00:17:03.538 "flush": false, 00:17:03.538 "reset": true, 00:17:03.538 "nvme_admin": false, 00:17:03.538 "nvme_io": false, 00:17:03.538 "nvme_io_md": false, 00:17:03.538 "write_zeroes": true, 00:17:03.538 "zcopy": false, 00:17:03.538 "get_zone_info": false, 00:17:03.538 "zone_management": false, 00:17:03.538 "zone_append": false, 00:17:03.539 "compare": false, 00:17:03.539 "compare_and_write": false, 00:17:03.539 "abort": false, 00:17:03.539 "seek_hole": false, 00:17:03.539 "seek_data": false, 00:17:03.539 "copy": false, 00:17:03.539 "nvme_iov_md": false 00:17:03.539 }, 00:17:03.539 "memory_domains": [ 00:17:03.539 { 00:17:03.539 "dma_device_id": "system", 00:17:03.539 "dma_device_type": 1 00:17:03.539 }, 00:17:03.539 { 00:17:03.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.539 "dma_device_type": 2 00:17:03.539 }, 00:17:03.539 { 00:17:03.539 "dma_device_id": "system", 00:17:03.539 "dma_device_type": 1 00:17:03.539 }, 00:17:03.539 { 00:17:03.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.539 "dma_device_type": 2 00:17:03.539 }, 00:17:03.539 { 00:17:03.539 "dma_device_id": "system", 00:17:03.539 "dma_device_type": 1 00:17:03.539 }, 00:17:03.539 { 00:17:03.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.539 "dma_device_type": 2 00:17:03.539 }, 00:17:03.539 { 00:17:03.539 "dma_device_id": "system", 00:17:03.539 "dma_device_type": 1 00:17:03.539 }, 00:17:03.539 { 00:17:03.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.539 "dma_device_type": 2 00:17:03.539 } 00:17:03.539 ], 00:17:03.539 "driver_specific": { 00:17:03.539 "raid": { 00:17:03.539 "uuid": "368a7197-f7be-4444-aab3-838adbd17810", 00:17:03.539 "strip_size_kb": 0, 00:17:03.539 "state": "online", 00:17:03.539 "raid_level": "raid1", 00:17:03.539 "superblock": true, 00:17:03.539 "num_base_bdevs": 4, 00:17:03.539 "num_base_bdevs_discovered": 4, 00:17:03.539 "num_base_bdevs_operational": 4, 00:17:03.539 "base_bdevs_list": [ 
00:17:03.539 { 00:17:03.539 "name": "pt1", 00:17:03.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.539 "is_configured": true, 00:17:03.539 "data_offset": 2048, 00:17:03.539 "data_size": 63488 00:17:03.539 }, 00:17:03.539 { 00:17:03.539 "name": "pt2", 00:17:03.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.539 "is_configured": true, 00:17:03.539 "data_offset": 2048, 00:17:03.539 "data_size": 63488 00:17:03.539 }, 00:17:03.539 { 00:17:03.539 "name": "pt3", 00:17:03.539 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:03.539 "is_configured": true, 00:17:03.539 "data_offset": 2048, 00:17:03.539 "data_size": 63488 00:17:03.539 }, 00:17:03.539 { 00:17:03.539 "name": "pt4", 00:17:03.539 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:03.539 "is_configured": true, 00:17:03.539 "data_offset": 2048, 00:17:03.539 "data_size": 63488 00:17:03.539 } 00:17:03.539 ] 00:17:03.539 } 00:17:03.539 } 00:17:03.539 }' 00:17:03.798 06:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.798 06:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:03.798 pt2 00:17:03.798 pt3 00:17:03.798 pt4' 00:17:03.798 06:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:03.798 06:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:03.798 06:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:03.798 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:03.798 "name": "pt1", 00:17:03.798 "aliases": [ 00:17:03.798 "00000000-0000-0000-0000-000000000001" 00:17:03.798 ], 00:17:03.798 "product_name": "passthru", 00:17:03.798 "block_size": 512, 00:17:03.798 "num_blocks": 65536, 00:17:03.798 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.798 "assigned_rate_limits": { 00:17:03.798 "rw_ios_per_sec": 0, 00:17:03.798 "rw_mbytes_per_sec": 0, 00:17:03.798 "r_mbytes_per_sec": 0, 00:17:03.798 "w_mbytes_per_sec": 0 00:17:03.798 }, 00:17:03.798 "claimed": true, 00:17:03.798 "claim_type": "exclusive_write", 00:17:03.798 "zoned": false, 00:17:03.798 "supported_io_types": { 00:17:03.798 "read": true, 00:17:03.798 "write": true, 00:17:03.798 "unmap": true, 00:17:03.798 "flush": true, 00:17:03.798 "reset": true, 00:17:03.798 "nvme_admin": false, 00:17:03.798 "nvme_io": false, 00:17:03.798 "nvme_io_md": false, 00:17:03.798 "write_zeroes": true, 00:17:03.798 "zcopy": true, 00:17:03.798 "get_zone_info": false, 00:17:03.798 "zone_management": false, 00:17:03.798 "zone_append": false, 00:17:03.798 "compare": false, 00:17:03.798 "compare_and_write": false, 00:17:03.798 "abort": true, 00:17:03.798 "seek_hole": false, 00:17:03.798 "seek_data": false, 00:17:03.798 "copy": true, 00:17:03.798 "nvme_iov_md": false 00:17:03.798 }, 00:17:03.798 "memory_domains": [ 00:17:03.798 { 00:17:03.798 "dma_device_id": "system", 00:17:03.798 "dma_device_type": 1 00:17:03.798 }, 00:17:03.798 { 00:17:03.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.798 "dma_device_type": 2 00:17:03.798 } 00:17:03.798 ], 00:17:03.798 "driver_specific": { 00:17:03.798 "passthru": { 00:17:03.798 "name": "pt1", 00:17:03.798 "base_bdev_name": "malloc1" 00:17:03.798 } 00:17:03.798 } 00:17:03.798 }' 00:17:03.798 06:48:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:04.058 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:04.058 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:04.058 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:04.058 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:04.058 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:04.058 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:04.058 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:04.058 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:04.058 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:04.317 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:04.317 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:04.317 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:04.317 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:04.317 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:04.577 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:04.577 "name": "pt2", 00:17:04.577 "aliases": [ 00:17:04.577 "00000000-0000-0000-0000-000000000002" 00:17:04.577 ], 00:17:04.577 "product_name": "passthru", 00:17:04.577 "block_size": 512, 00:17:04.577 "num_blocks": 65536, 00:17:04.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.577 "assigned_rate_limits": { 00:17:04.577 "rw_ios_per_sec": 0, 00:17:04.577 "rw_mbytes_per_sec": 0, 00:17:04.577 "r_mbytes_per_sec": 0, 00:17:04.577 "w_mbytes_per_sec": 0 00:17:04.577 }, 00:17:04.577 "claimed": true, 00:17:04.577 "claim_type": "exclusive_write", 00:17:04.577 "zoned": false, 00:17:04.577 "supported_io_types": { 00:17:04.577 "read": true, 00:17:04.577 "write": true, 00:17:04.577 "unmap": true, 00:17:04.577 "flush": true, 00:17:04.577 "reset": true, 00:17:04.577 "nvme_admin": false, 00:17:04.577 "nvme_io": false, 00:17:04.577 "nvme_io_md": false, 00:17:04.577 "write_zeroes": true, 00:17:04.577 "zcopy": true, 00:17:04.577 "get_zone_info": false, 00:17:04.577 "zone_management": false, 00:17:04.577 "zone_append": false, 00:17:04.577 "compare": false, 00:17:04.577 "compare_and_write": false, 00:17:04.577 "abort": true, 00:17:04.577 "seek_hole": false, 00:17:04.577 "seek_data": false, 00:17:04.577 "copy": true, 00:17:04.577 "nvme_iov_md": false 00:17:04.577 }, 00:17:04.577 "memory_domains": [ 00:17:04.577 { 00:17:04.577 "dma_device_id": "system", 00:17:04.577 "dma_device_type": 1 00:17:04.577 }, 00:17:04.577 { 00:17:04.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.577 "dma_device_type": 2 00:17:04.577 } 00:17:04.577 ], 00:17:04.577 "driver_specific": { 00:17:04.577 "passthru": { 00:17:04.577 "name": "pt2", 00:17:04.577 "base_bdev_name": "malloc2" 00:17:04.577 } 00:17:04.577 } 00:17:04.577 }' 00:17:04.577 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:04.577 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:04.577 
06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:04.577 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:04.577 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:04.577 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:04.577 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:04.577 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:04.837 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:04.837 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:04.837 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:04.837 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:04.837 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:04.837 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:04.837 06:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:05.096 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:05.096 "name": "pt3", 00:17:05.096 "aliases": [ 00:17:05.096 "00000000-0000-0000-0000-000000000003" 00:17:05.096 ], 00:17:05.096 "product_name": "passthru", 00:17:05.096 "block_size": 512, 00:17:05.096 "num_blocks": 65536, 00:17:05.096 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:05.096 "assigned_rate_limits": { 00:17:05.096 "rw_ios_per_sec": 0, 00:17:05.096 "rw_mbytes_per_sec": 0, 00:17:05.096 "r_mbytes_per_sec": 0, 00:17:05.096 "w_mbytes_per_sec": 0 00:17:05.096 }, 00:17:05.096 "claimed": true, 00:17:05.096 "claim_type": "exclusive_write", 00:17:05.096 "zoned": false, 00:17:05.096 "supported_io_types": { 00:17:05.096 "read": true, 00:17:05.096 "write": true, 00:17:05.096 "unmap": true, 00:17:05.096 "flush": true, 00:17:05.096 "reset": true, 00:17:05.096 "nvme_admin": false, 00:17:05.096 "nvme_io": false, 00:17:05.096 "nvme_io_md": false, 00:17:05.096 "write_zeroes": true, 00:17:05.096 "zcopy": true, 00:17:05.096 "get_zone_info": false, 00:17:05.096 "zone_management": false, 00:17:05.096 "zone_append": false, 00:17:05.096 "compare": false, 00:17:05.096 "compare_and_write": false, 00:17:05.096 "abort": true, 00:17:05.096 "seek_hole": false, 00:17:05.096 "seek_data": false, 00:17:05.096 "copy": true, 00:17:05.096 "nvme_iov_md": false 00:17:05.096 }, 00:17:05.096 "memory_domains": [ 00:17:05.096 { 00:17:05.096 "dma_device_id": "system", 00:17:05.096 "dma_device_type": 1 00:17:05.096 }, 00:17:05.096 { 00:17:05.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.096 "dma_device_type": 2 00:17:05.096 } 00:17:05.096 ], 00:17:05.096 "driver_specific": { 00:17:05.096 "passthru": { 00:17:05.096 "name": "pt3", 00:17:05.096 "base_bdev_name": "malloc3" 00:17:05.096 } 00:17:05.096 } 00:17:05.096 }' 00:17:05.096 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.096 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.096 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:05.096 06:48:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.096 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.096 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:05.096 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.355 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.355 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:05.355 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.355 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.355 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:05.355 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:05.355 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:05.355 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:17:05.614 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:05.614 "name": "pt4", 00:17:05.614 "aliases": [ 00:17:05.614 "00000000-0000-0000-0000-000000000004" 00:17:05.614 ], 00:17:05.614 "product_name": "passthru", 00:17:05.614 "block_size": 512, 00:17:05.614 "num_blocks": 65536, 00:17:05.614 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:05.614 "assigned_rate_limits": { 00:17:05.614 "rw_ios_per_sec": 0, 00:17:05.614 "rw_mbytes_per_sec": 0, 00:17:05.614 "r_mbytes_per_sec": 0, 00:17:05.614 "w_mbytes_per_sec": 0 00:17:05.614 }, 00:17:05.614 "claimed": true, 00:17:05.614 "claim_type": "exclusive_write", 00:17:05.614 "zoned": false, 00:17:05.614 "supported_io_types": { 00:17:05.614 "read": true, 00:17:05.614 "write": true, 00:17:05.614 "unmap": true, 00:17:05.614 "flush": true, 00:17:05.614 "reset": true, 00:17:05.614 "nvme_admin": false, 00:17:05.614 "nvme_io": false, 00:17:05.614 "nvme_io_md": false, 00:17:05.614 "write_zeroes": true, 00:17:05.614 "zcopy": true, 00:17:05.614 "get_zone_info": false, 00:17:05.614 "zone_management": false, 00:17:05.614 "zone_append": false, 00:17:05.614 "compare": false, 00:17:05.614 "compare_and_write": false, 00:17:05.614 "abort": true, 00:17:05.614 "seek_hole": false, 00:17:05.614 "seek_data": false, 00:17:05.614 "copy": true, 00:17:05.614 "nvme_iov_md": false 00:17:05.614 }, 00:17:05.615 "memory_domains": [ 00:17:05.615 { 00:17:05.615 "dma_device_id": "system", 00:17:05.615 "dma_device_type": 1 00:17:05.615 }, 00:17:05.615 { 00:17:05.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.615 "dma_device_type": 2 00:17:05.615 } 00:17:05.615 ], 00:17:05.615 "driver_specific": { 00:17:05.615 "passthru": { 00:17:05.615 "name": "pt4", 00:17:05.615 "base_bdev_name": "malloc4" 00:17:05.615 } 00:17:05.615 } 00:17:05.615 }' 00:17:05.615 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.615 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.615 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:05.615 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.615 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.876 
06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:05.876 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.876 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.876 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:05.876 06:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.876 06:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.876 06:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:05.876 06:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:05.876 06:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:17:06.136 [2024-08-14 06:48:33.266755] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.136 06:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=368a7197-f7be-4444-aab3-838adbd17810 00:17:06.136 06:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 368a7197-f7be-4444-aab3-838adbd17810 ']' 00:17:06.136 06:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:06.404 [2024-08-14 06:48:33.482103] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.404 [2024-08-14 06:48:33.482271] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.404 [2024-08-14 06:48:33.482396] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.404 [2024-08-14 06:48:33.482522] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.404 [2024-08-14 06:48:33.482539] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:17:06.404 06:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.404 06:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:17:06.680 06:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:17:06.680 06:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:17:06.680 06:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.680 06:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:06.680 06:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.680 06:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:06.955 06:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.955 06:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete 
pt3 00:17:07.214 06:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:07.214 06:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:07.474 06:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:07.474 06:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:07.474 06:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:17:07.474 06:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:07.474 06:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:17:07.474 06:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:07.474 06:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.474 06:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:17:07.474 06:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.474 06:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:17:07.474 06:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.474 06:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:17:07.474 06:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.474 06:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:07.474 06:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:07.734 [2024-08-14 06:48:34.847839] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:07.734 [2024-08-14 06:48:34.850098] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:07.734 [2024-08-14 06:48:34.850150] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:07.734 [2024-08-14 06:48:34.850198] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:07.734 [2024-08-14 06:48:34.850272] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:07.734 [2024-08-14 06:48:34.850347] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:07.734 [2024-08-14 06:48:34.850367] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc3 00:17:07.734 [2024-08-14 06:48:34.850391] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:07.734 [2024-08-14 06:48:34.850405] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.734 [2024-08-14 06:48:34.850421] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:17:07.734 request: 00:17:07.734 { 00:17:07.734 "name": "raid_bdev1", 00:17:07.734 "raid_level": "raid1", 00:17:07.734 "base_bdevs": [ 00:17:07.734 "malloc1", 00:17:07.734 "malloc2", 00:17:07.734 "malloc3", 00:17:07.734 "malloc4" 00:17:07.734 ], 00:17:07.734 "superblock": false, 00:17:07.734 "method": "bdev_raid_create", 00:17:07.734 "req_id": 1 00:17:07.734 } 00:17:07.734 Got JSON-RPC error response 00:17:07.734 response: 00:17:07.734 { 00:17:07.734 "code": -17, 00:17:07.734 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:07.734 } 00:17:07.734 06:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:17:07.734 06:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:17:07.734 06:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:17:07.734 06:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:17:07.734 06:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.734 06:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:17:07.994 06:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:17:07.994 06:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:17:07.994 06:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:08.253 [2024-08-14 06:48:35.255068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:08.253 [2024-08-14 06:48:35.255185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.253 [2024-08-14 06:48:35.255208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:08.253 [2024-08-14 06:48:35.255225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.253 [2024-08-14 06:48:35.257883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.253 [2024-08-14 06:48:35.258005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:08.253 [2024-08-14 06:48:35.258118] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:08.253 [2024-08-14 06:48:35.258210] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:08.253 pt1 00:17:08.253 06:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:08.253 06:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:08.253 06:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:08.253 06:48:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:08.253 06:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:08.253 06:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:08.253 06:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.253 06:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.253 06:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.253 06:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.253 06:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.253 06:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.253 06:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.253 "name": "raid_bdev1", 00:17:08.253 "uuid": "368a7197-f7be-4444-aab3-838adbd17810", 00:17:08.253 "strip_size_kb": 0, 00:17:08.253 "state": "configuring", 00:17:08.253 "raid_level": "raid1", 00:17:08.253 "superblock": true, 00:17:08.253 "num_base_bdevs": 4, 00:17:08.253 "num_base_bdevs_discovered": 1, 00:17:08.253 "num_base_bdevs_operational": 4, 00:17:08.253 "base_bdevs_list": [ 00:17:08.253 { 00:17:08.253 "name": "pt1", 00:17:08.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:08.253 "is_configured": true, 00:17:08.253 "data_offset": 2048, 00:17:08.253 "data_size": 63488 00:17:08.253 }, 00:17:08.253 { 00:17:08.253 "name": null, 00:17:08.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.253 "is_configured": false, 00:17:08.253 "data_offset": 2048, 00:17:08.253 "data_size": 63488 00:17:08.253 }, 00:17:08.253 { 00:17:08.253 "name": null, 00:17:08.253 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:08.253 "is_configured": false, 00:17:08.253 "data_offset": 2048, 00:17:08.253 "data_size": 63488 00:17:08.253 }, 00:17:08.253 { 00:17:08.253 "name": null, 00:17:08.253 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:08.253 "is_configured": false, 00:17:08.253 "data_offset": 2048, 00:17:08.253 "data_size": 63488 00:17:08.253 } 00:17:08.253 ] 00:17:08.253 }' 00:17:08.253 06:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.253 06:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.822 06:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:17:08.822 06:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:09.081 [2024-08-14 06:48:36.221565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:09.081 [2024-08-14 06:48:36.221796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.081 [2024-08-14 06:48:36.221844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:09.081 [2024-08-14 06:48:36.221886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.081 [2024-08-14 06:48:36.222465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:17:09.081 [2024-08-14 06:48:36.222531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:09.081 [2024-08-14 06:48:36.222655] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:09.081 [2024-08-14 06:48:36.222718] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.081 pt2 00:17:09.081 06:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:09.340 [2024-08-14 06:48:36.429336] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:09.340 06:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:09.340 06:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:09.340 06:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:09.340 06:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:09.340 06:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:09.340 06:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:09.340 06:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.340 06:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.340 06:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.340 06:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.340 06:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.340 06:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.599 06:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.599 "name": "raid_bdev1", 00:17:09.599 "uuid": "368a7197-f7be-4444-aab3-838adbd17810", 00:17:09.599 "strip_size_kb": 0, 00:17:09.599 "state": "configuring", 00:17:09.599 "raid_level": "raid1", 00:17:09.599 "superblock": true, 00:17:09.599 "num_base_bdevs": 4, 00:17:09.599 "num_base_bdevs_discovered": 1, 00:17:09.599 "num_base_bdevs_operational": 4, 00:17:09.599 "base_bdevs_list": [ 00:17:09.599 { 00:17:09.599 "name": "pt1", 00:17:09.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:09.599 "is_configured": true, 00:17:09.599 "data_offset": 2048, 00:17:09.599 "data_size": 63488 00:17:09.599 }, 00:17:09.599 { 00:17:09.599 "name": null, 00:17:09.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.599 "is_configured": false, 00:17:09.599 "data_offset": 2048, 00:17:09.599 "data_size": 63488 00:17:09.599 }, 00:17:09.599 { 00:17:09.599 "name": null, 00:17:09.599 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:09.599 "is_configured": false, 00:17:09.599 "data_offset": 2048, 00:17:09.599 "data_size": 63488 00:17:09.599 }, 00:17:09.599 { 00:17:09.599 "name": null, 00:17:09.599 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:09.599 "is_configured": false, 00:17:09.599 "data_offset": 2048, 00:17:09.599 "data_size": 63488 00:17:09.599 } 00:17:09.599 ] 00:17:09.599 }' 00:17:09.599 06:48:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.599 06:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.167 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:17:10.167 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:10.167 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:10.167 [2024-08-14 06:48:37.359676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:10.167 [2024-08-14 06:48:37.359783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.167 [2024-08-14 06:48:37.359812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:10.167 [2024-08-14 06:48:37.359822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.167 [2024-08-14 06:48:37.360350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.167 [2024-08-14 06:48:37.360371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:10.167 [2024-08-14 06:48:37.360472] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:10.167 [2024-08-14 06:48:37.360497] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:10.167 pt2 00:17:10.167 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:17:10.167 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:10.167 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:10.426 [2024-08-14 06:48:37.575297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:10.426 [2024-08-14 06:48:37.575397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.426 [2024-08-14 06:48:37.575436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:10.426 [2024-08-14 06:48:37.575447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.426 [2024-08-14 06:48:37.576037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.426 [2024-08-14 06:48:37.576056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:10.426 [2024-08-14 06:48:37.576164] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:10.426 [2024-08-14 06:48:37.576212] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:10.426 pt3 00:17:10.426 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:17:10.426 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:10.426 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:10.685 [2024-08-14 06:48:37.782977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:17:10.685 [2024-08-14 06:48:37.783087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.685 [2024-08-14 06:48:37.783120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:10.685 [2024-08-14 06:48:37.783130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.685 [2024-08-14 06:48:37.783660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.685 [2024-08-14 06:48:37.783802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:10.685 [2024-08-14 06:48:37.783930] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:10.685 [2024-08-14 06:48:37.783960] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:10.685 [2024-08-14 06:48:37.784110] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:17:10.685 [2024-08-14 06:48:37.784119] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:10.685 [2024-08-14 06:48:37.784400] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:17:10.685 [2024-08-14 06:48:37.784542] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:17:10.685 [2024-08-14 06:48:37.784558] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:17:10.685 [2024-08-14 06:48:37.784663] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.685 pt4 00:17:10.685 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:17:10.685 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:10.685 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:10.685 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:10.685 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:10.685 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:10.685 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:10.685 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:10.685 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.685 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.685 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.685 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.685 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.685 06:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.945 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.945 "name": "raid_bdev1", 00:17:10.945 "uuid": "368a7197-f7be-4444-aab3-838adbd17810", 00:17:10.945 "strip_size_kb": 0, 00:17:10.945 "state": "online", 00:17:10.945 "raid_level": 
"raid1", 00:17:10.945 "superblock": true, 00:17:10.945 "num_base_bdevs": 4, 00:17:10.946 "num_base_bdevs_discovered": 4, 00:17:10.946 "num_base_bdevs_operational": 4, 00:17:10.946 "base_bdevs_list": [ 00:17:10.946 { 00:17:10.946 "name": "pt1", 00:17:10.946 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:10.946 "is_configured": true, 00:17:10.946 "data_offset": 2048, 00:17:10.946 "data_size": 63488 00:17:10.946 }, 00:17:10.946 { 00:17:10.946 "name": "pt2", 00:17:10.946 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:10.946 "is_configured": true, 00:17:10.946 "data_offset": 2048, 00:17:10.946 "data_size": 63488 00:17:10.946 }, 00:17:10.946 { 00:17:10.946 "name": "pt3", 00:17:10.946 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:10.946 "is_configured": true, 00:17:10.946 "data_offset": 2048, 00:17:10.946 "data_size": 63488 00:17:10.946 }, 00:17:10.946 { 00:17:10.946 "name": "pt4", 00:17:10.946 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:10.946 "is_configured": true, 00:17:10.946 "data_offset": 2048, 00:17:10.946 "data_size": 63488 00:17:10.946 } 00:17:10.946 ] 00:17:10.946 }' 00:17:10.946 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.946 06:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.514 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:17:11.515 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:11.515 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:11.515 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:11.515 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:11.515 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:11.515 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:11.515 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:11.515 [2024-08-14 06:48:38.729933] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:11.515 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:11.515 "name": "raid_bdev1", 00:17:11.515 "aliases": [ 00:17:11.515 "368a7197-f7be-4444-aab3-838adbd17810" 00:17:11.515 ], 00:17:11.515 "product_name": "Raid Volume", 00:17:11.515 "block_size": 512, 00:17:11.515 "num_blocks": 63488, 00:17:11.515 "uuid": "368a7197-f7be-4444-aab3-838adbd17810", 00:17:11.515 "assigned_rate_limits": { 00:17:11.515 "rw_ios_per_sec": 0, 00:17:11.515 "rw_mbytes_per_sec": 0, 00:17:11.515 "r_mbytes_per_sec": 0, 00:17:11.515 "w_mbytes_per_sec": 0 00:17:11.515 }, 00:17:11.515 "claimed": false, 00:17:11.515 "zoned": false, 00:17:11.515 "supported_io_types": { 00:17:11.515 "read": true, 00:17:11.515 "write": true, 00:17:11.515 "unmap": false, 00:17:11.515 "flush": false, 00:17:11.515 "reset": true, 00:17:11.515 "nvme_admin": false, 00:17:11.515 "nvme_io": false, 00:17:11.515 "nvme_io_md": false, 00:17:11.515 "write_zeroes": true, 00:17:11.515 "zcopy": false, 00:17:11.515 "get_zone_info": false, 00:17:11.515 "zone_management": false, 00:17:11.515 "zone_append": false, 00:17:11.515 "compare": false, 00:17:11.515 "compare_and_write": false, 00:17:11.515 
"abort": false, 00:17:11.515 "seek_hole": false, 00:17:11.515 "seek_data": false, 00:17:11.515 "copy": false, 00:17:11.515 "nvme_iov_md": false 00:17:11.515 }, 00:17:11.515 "memory_domains": [ 00:17:11.515 { 00:17:11.515 "dma_device_id": "system", 00:17:11.515 "dma_device_type": 1 00:17:11.515 }, 00:17:11.515 { 00:17:11.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.515 "dma_device_type": 2 00:17:11.515 }, 00:17:11.515 { 00:17:11.515 "dma_device_id": "system", 00:17:11.515 "dma_device_type": 1 00:17:11.515 }, 00:17:11.515 { 00:17:11.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.515 "dma_device_type": 2 00:17:11.515 }, 00:17:11.515 { 00:17:11.515 "dma_device_id": "system", 00:17:11.515 "dma_device_type": 1 00:17:11.515 }, 00:17:11.515 { 00:17:11.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.515 "dma_device_type": 2 00:17:11.515 }, 00:17:11.515 { 00:17:11.515 "dma_device_id": "system", 00:17:11.515 "dma_device_type": 1 00:17:11.515 }, 00:17:11.515 { 00:17:11.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.515 "dma_device_type": 2 00:17:11.515 } 00:17:11.515 ], 00:17:11.515 "driver_specific": { 00:17:11.515 "raid": { 00:17:11.515 "uuid": "368a7197-f7be-4444-aab3-838adbd17810", 00:17:11.515 "strip_size_kb": 0, 00:17:11.515 "state": "online", 00:17:11.515 "raid_level": "raid1", 00:17:11.515 "superblock": true, 00:17:11.515 "num_base_bdevs": 4, 00:17:11.515 "num_base_bdevs_discovered": 4, 00:17:11.515 "num_base_bdevs_operational": 4, 00:17:11.515 "base_bdevs_list": [ 00:17:11.515 { 00:17:11.515 "name": "pt1", 00:17:11.515 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:11.515 "is_configured": true, 00:17:11.515 "data_offset": 2048, 00:17:11.515 "data_size": 63488 00:17:11.515 }, 00:17:11.515 { 00:17:11.515 "name": "pt2", 00:17:11.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.515 "is_configured": true, 00:17:11.515 "data_offset": 2048, 00:17:11.515 "data_size": 63488 00:17:11.515 }, 00:17:11.515 { 00:17:11.515 "name": "pt3", 00:17:11.515 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:11.515 "is_configured": true, 00:17:11.515 "data_offset": 2048, 00:17:11.515 "data_size": 63488 00:17:11.515 }, 00:17:11.515 { 00:17:11.515 "name": "pt4", 00:17:11.515 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:11.515 "is_configured": true, 00:17:11.515 "data_offset": 2048, 00:17:11.515 "data_size": 63488 00:17:11.515 } 00:17:11.515 ] 00:17:11.515 } 00:17:11.515 } 00:17:11.515 }' 00:17:11.515 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:11.775 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:11.775 pt2 00:17:11.775 pt3 00:17:11.775 pt4' 00:17:11.775 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:11.775 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:11.775 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:11.775 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:11.775 "name": "pt1", 00:17:11.775 "aliases": [ 00:17:11.775 "00000000-0000-0000-0000-000000000001" 00:17:11.775 ], 00:17:11.775 "product_name": "passthru", 00:17:11.775 "block_size": 512, 00:17:11.775 "num_blocks": 65536, 00:17:11.775 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:11.775 "assigned_rate_limits": { 00:17:11.775 "rw_ios_per_sec": 0, 00:17:11.775 "rw_mbytes_per_sec": 0, 00:17:11.775 "r_mbytes_per_sec": 0, 00:17:11.775 "w_mbytes_per_sec": 0 00:17:11.775 }, 00:17:11.775 "claimed": true, 00:17:11.775 "claim_type": "exclusive_write", 00:17:11.775 "zoned": false, 00:17:11.775 "supported_io_types": { 00:17:11.775 "read": true, 00:17:11.775 "write": true, 00:17:11.775 "unmap": true, 00:17:11.775 "flush": true, 00:17:11.775 "reset": true, 00:17:11.775 "nvme_admin": false, 00:17:11.775 "nvme_io": false, 00:17:11.775 "nvme_io_md": false, 00:17:11.775 "write_zeroes": true, 00:17:11.775 "zcopy": true, 00:17:11.775 "get_zone_info": false, 00:17:11.775 "zone_management": false, 00:17:11.775 "zone_append": false, 00:17:11.775 "compare": false, 00:17:11.775 "compare_and_write": false, 00:17:11.775 "abort": true, 00:17:11.775 "seek_hole": false, 00:17:11.775 "seek_data": false, 00:17:11.775 "copy": true, 00:17:11.775 "nvme_iov_md": false 00:17:11.775 }, 00:17:11.775 "memory_domains": [ 00:17:11.775 { 00:17:11.775 "dma_device_id": "system", 00:17:11.775 "dma_device_type": 1 00:17:11.775 }, 00:17:11.775 { 00:17:11.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.775 "dma_device_type": 2 00:17:11.775 } 00:17:11.775 ], 00:17:11.775 "driver_specific": { 00:17:11.775 "passthru": { 00:17:11.775 "name": "pt1", 00:17:11.775 "base_bdev_name": "malloc1" 00:17:11.775 } 00:17:11.775 } 00:17:11.775 }' 00:17:11.775 06:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.035 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.035 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:12.035 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.035 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.035 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:12.035 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.035 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.035 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:12.035 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.035 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.294 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:12.294 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:12.294 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:12.295 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:12.295 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:12.295 "name": "pt2", 00:17:12.295 "aliases": [ 00:17:12.295 "00000000-0000-0000-0000-000000000002" 00:17:12.295 ], 00:17:12.295 "product_name": "passthru", 00:17:12.295 "block_size": 512, 00:17:12.295 "num_blocks": 65536, 00:17:12.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.295 "assigned_rate_limits": { 00:17:12.295 "rw_ios_per_sec": 0, 00:17:12.295 "rw_mbytes_per_sec": 0, 
00:17:12.295 "r_mbytes_per_sec": 0, 00:17:12.295 "w_mbytes_per_sec": 0 00:17:12.295 }, 00:17:12.295 "claimed": true, 00:17:12.295 "claim_type": "exclusive_write", 00:17:12.295 "zoned": false, 00:17:12.295 "supported_io_types": { 00:17:12.295 "read": true, 00:17:12.295 "write": true, 00:17:12.295 "unmap": true, 00:17:12.295 "flush": true, 00:17:12.295 "reset": true, 00:17:12.295 "nvme_admin": false, 00:17:12.295 "nvme_io": false, 00:17:12.295 "nvme_io_md": false, 00:17:12.295 "write_zeroes": true, 00:17:12.295 "zcopy": true, 00:17:12.295 "get_zone_info": false, 00:17:12.295 "zone_management": false, 00:17:12.295 "zone_append": false, 00:17:12.295 "compare": false, 00:17:12.295 "compare_and_write": false, 00:17:12.295 "abort": true, 00:17:12.295 "seek_hole": false, 00:17:12.295 "seek_data": false, 00:17:12.295 "copy": true, 00:17:12.295 "nvme_iov_md": false 00:17:12.295 }, 00:17:12.295 "memory_domains": [ 00:17:12.295 { 00:17:12.295 "dma_device_id": "system", 00:17:12.295 "dma_device_type": 1 00:17:12.295 }, 00:17:12.295 { 00:17:12.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.295 "dma_device_type": 2 00:17:12.295 } 00:17:12.295 ], 00:17:12.295 "driver_specific": { 00:17:12.295 "passthru": { 00:17:12.295 "name": "pt2", 00:17:12.295 "base_bdev_name": "malloc2" 00:17:12.295 } 00:17:12.295 } 00:17:12.295 }' 00:17:12.295 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.555 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.555 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:12.555 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.555 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.555 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:12.555 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.555 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.555 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:12.555 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.814 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.814 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:12.814 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:12.814 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:12.814 06:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:13.074 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:13.074 "name": "pt3", 00:17:13.074 "aliases": [ 00:17:13.074 "00000000-0000-0000-0000-000000000003" 00:17:13.074 ], 00:17:13.074 "product_name": "passthru", 00:17:13.074 "block_size": 512, 00:17:13.074 "num_blocks": 65536, 00:17:13.074 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:13.074 "assigned_rate_limits": { 00:17:13.074 "rw_ios_per_sec": 0, 00:17:13.074 "rw_mbytes_per_sec": 0, 00:17:13.074 "r_mbytes_per_sec": 0, 00:17:13.074 "w_mbytes_per_sec": 0 00:17:13.074 }, 00:17:13.074 "claimed": true, 00:17:13.074 "claim_type": 
"exclusive_write", 00:17:13.074 "zoned": false, 00:17:13.074 "supported_io_types": { 00:17:13.074 "read": true, 00:17:13.074 "write": true, 00:17:13.074 "unmap": true, 00:17:13.074 "flush": true, 00:17:13.074 "reset": true, 00:17:13.074 "nvme_admin": false, 00:17:13.074 "nvme_io": false, 00:17:13.074 "nvme_io_md": false, 00:17:13.074 "write_zeroes": true, 00:17:13.074 "zcopy": true, 00:17:13.074 "get_zone_info": false, 00:17:13.074 "zone_management": false, 00:17:13.074 "zone_append": false, 00:17:13.074 "compare": false, 00:17:13.074 "compare_and_write": false, 00:17:13.074 "abort": true, 00:17:13.074 "seek_hole": false, 00:17:13.074 "seek_data": false, 00:17:13.074 "copy": true, 00:17:13.074 "nvme_iov_md": false 00:17:13.074 }, 00:17:13.074 "memory_domains": [ 00:17:13.074 { 00:17:13.074 "dma_device_id": "system", 00:17:13.074 "dma_device_type": 1 00:17:13.074 }, 00:17:13.074 { 00:17:13.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.074 "dma_device_type": 2 00:17:13.074 } 00:17:13.074 ], 00:17:13.074 "driver_specific": { 00:17:13.074 "passthru": { 00:17:13.074 "name": "pt3", 00:17:13.074 "base_bdev_name": "malloc3" 00:17:13.074 } 00:17:13.074 } 00:17:13.074 }' 00:17:13.074 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.074 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.074 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:13.074 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.074 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.074 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:13.074 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.074 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.334 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:13.335 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.335 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.335 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:13.335 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:13.335 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:17:13.335 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:13.595 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:13.595 "name": "pt4", 00:17:13.595 "aliases": [ 00:17:13.595 "00000000-0000-0000-0000-000000000004" 00:17:13.595 ], 00:17:13.595 "product_name": "passthru", 00:17:13.595 "block_size": 512, 00:17:13.595 "num_blocks": 65536, 00:17:13.595 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:13.595 "assigned_rate_limits": { 00:17:13.595 "rw_ios_per_sec": 0, 00:17:13.595 "rw_mbytes_per_sec": 0, 00:17:13.595 "r_mbytes_per_sec": 0, 00:17:13.595 "w_mbytes_per_sec": 0 00:17:13.595 }, 00:17:13.595 "claimed": true, 00:17:13.595 "claim_type": "exclusive_write", 00:17:13.595 "zoned": false, 00:17:13.595 "supported_io_types": { 00:17:13.595 "read": true, 00:17:13.595 "write": true, 00:17:13.595 
"unmap": true, 00:17:13.595 "flush": true, 00:17:13.595 "reset": true, 00:17:13.595 "nvme_admin": false, 00:17:13.595 "nvme_io": false, 00:17:13.595 "nvme_io_md": false, 00:17:13.595 "write_zeroes": true, 00:17:13.595 "zcopy": true, 00:17:13.595 "get_zone_info": false, 00:17:13.595 "zone_management": false, 00:17:13.595 "zone_append": false, 00:17:13.595 "compare": false, 00:17:13.595 "compare_and_write": false, 00:17:13.595 "abort": true, 00:17:13.595 "seek_hole": false, 00:17:13.595 "seek_data": false, 00:17:13.595 "copy": true, 00:17:13.595 "nvme_iov_md": false 00:17:13.595 }, 00:17:13.595 "memory_domains": [ 00:17:13.595 { 00:17:13.595 "dma_device_id": "system", 00:17:13.595 "dma_device_type": 1 00:17:13.595 }, 00:17:13.595 { 00:17:13.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.595 "dma_device_type": 2 00:17:13.595 } 00:17:13.595 ], 00:17:13.595 "driver_specific": { 00:17:13.595 "passthru": { 00:17:13.595 "name": "pt4", 00:17:13.595 "base_bdev_name": "malloc4" 00:17:13.595 } 00:17:13.595 } 00:17:13.595 }' 00:17:13.595 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.595 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.595 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:13.595 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.595 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.595 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:13.595 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.595 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.855 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:13.855 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.855 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.855 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:13.855 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:13.855 06:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:17:14.115 [2024-08-14 06:48:41.146309] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.115 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 368a7197-f7be-4444-aab3-838adbd17810 '!=' 368a7197-f7be-4444-aab3-838adbd17810 ']' 00:17:14.115 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:17:14.115 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:14.115 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:14.115 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:14.115 [2024-08-14 06:48:41.349732] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:14.374 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:14.374 06:48:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:14.374 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:14.374 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:14.374 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:14.374 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:14.374 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.374 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.374 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.374 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.374 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.374 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.374 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:14.374 "name": "raid_bdev1", 00:17:14.374 "uuid": "368a7197-f7be-4444-aab3-838adbd17810", 00:17:14.374 "strip_size_kb": 0, 00:17:14.374 "state": "online", 00:17:14.374 "raid_level": "raid1", 00:17:14.374 "superblock": true, 00:17:14.374 "num_base_bdevs": 4, 00:17:14.374 "num_base_bdevs_discovered": 3, 00:17:14.374 "num_base_bdevs_operational": 3, 00:17:14.374 "base_bdevs_list": [ 00:17:14.374 { 00:17:14.374 "name": null, 00:17:14.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.374 "is_configured": false, 00:17:14.374 "data_offset": 2048, 00:17:14.374 "data_size": 63488 00:17:14.374 }, 00:17:14.374 { 00:17:14.374 "name": "pt2", 00:17:14.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.374 "is_configured": true, 00:17:14.374 "data_offset": 2048, 00:17:14.374 "data_size": 63488 00:17:14.374 }, 00:17:14.374 { 00:17:14.374 "name": "pt3", 00:17:14.374 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.374 "is_configured": true, 00:17:14.374 "data_offset": 2048, 00:17:14.374 "data_size": 63488 00:17:14.374 }, 00:17:14.374 { 00:17:14.374 "name": "pt4", 00:17:14.374 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:14.374 "is_configured": true, 00:17:14.374 "data_offset": 2048, 00:17:14.375 "data_size": 63488 00:17:14.375 } 00:17:14.375 ] 00:17:14.375 }' 00:17:14.375 06:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:14.375 06:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.943 06:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:15.203 [2024-08-14 06:48:42.324050] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.203 [2024-08-14 06:48:42.324240] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.203 [2024-08-14 06:48:42.324384] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.203 [2024-08-14 06:48:42.324496] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:17:15.203 [2024-08-14 06:48:42.324541] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:17:15.203 06:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:17:15.203 06:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.462 06:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:17:15.462 06:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:17:15.462 06:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:17:15.462 06:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:17:15.462 06:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:15.722 06:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:17:15.722 06:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:17:15.722 06:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:15.722 06:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:17:15.722 06:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:17:15.722 06:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:15.992 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:17:15.993 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:17:15.993 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:17:15.993 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:17:15.993 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:16.255 [2024-08-14 06:48:43.310386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:16.255 [2024-08-14 06:48:43.310573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.255 [2024-08-14 06:48:43.310617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:16.255 [2024-08-14 06:48:43.310653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.255 [2024-08-14 06:48:43.313205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.255 [2024-08-14 06:48:43.313282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:16.255 [2024-08-14 06:48:43.313409] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:16.255 [2024-08-14 06:48:43.313476] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:16.255 pt2 00:17:16.255 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:16.255 06:48:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:16.255 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:16.255 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:16.255 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:16.255 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:16.255 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:16.255 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:16.255 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:16.255 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:16.255 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.255 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.515 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:16.515 "name": "raid_bdev1", 00:17:16.515 "uuid": "368a7197-f7be-4444-aab3-838adbd17810", 00:17:16.515 "strip_size_kb": 0, 00:17:16.515 "state": "configuring", 00:17:16.515 "raid_level": "raid1", 00:17:16.515 "superblock": true, 00:17:16.515 "num_base_bdevs": 4, 00:17:16.515 "num_base_bdevs_discovered": 1, 00:17:16.515 "num_base_bdevs_operational": 3, 00:17:16.515 "base_bdevs_list": [ 00:17:16.515 { 00:17:16.515 "name": null, 00:17:16.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.515 "is_configured": false, 00:17:16.515 "data_offset": 2048, 00:17:16.515 "data_size": 63488 00:17:16.515 }, 00:17:16.515 { 00:17:16.515 "name": "pt2", 00:17:16.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.515 "is_configured": true, 00:17:16.515 "data_offset": 2048, 00:17:16.515 "data_size": 63488 00:17:16.515 }, 00:17:16.515 { 00:17:16.515 "name": null, 00:17:16.515 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.515 "is_configured": false, 00:17:16.515 "data_offset": 2048, 00:17:16.515 "data_size": 63488 00:17:16.515 }, 00:17:16.515 { 00:17:16.515 "name": null, 00:17:16.515 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:16.515 "is_configured": false, 00:17:16.515 "data_offset": 2048, 00:17:16.515 "data_size": 63488 00:17:16.515 } 00:17:16.515 ] 00:17:16.515 }' 00:17:16.515 06:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:16.515 06:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.085 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:17:17.085 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:17:17.085 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:17.345 [2024-08-14 06:48:44.376994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:17.345 [2024-08-14 06:48:44.377213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.345 
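Note: the re-assembly exercised here can be reproduced by hand with the same RPCs the test script drives; the socket path, UUIDs and bdev names below simply mirror this run and carry no other significance. A minimal sketch, assuming an SPDK application is still listening on /var/tmp/spdk-raid.sock and malloc2 still holds the raid superblock written earlier (rpc.py stands for scripts/rpc.py in the SPDK repo):

    # re-register the passthru bdev; examine finds the raid superblock on it
    # ("raid superblock found on bdev pt2") and re-claims it into raid_bdev1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create \
        -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # the raid bdev stays "configuring" until enough members are back
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state'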
[2024-08-14 06:48:44.377249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:17.345 [2024-08-14 06:48:44.377261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.345 [2024-08-14 06:48:44.377872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.345 [2024-08-14 06:48:44.377896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:17.345 [2024-08-14 06:48:44.378006] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:17.345 [2024-08-14 06:48:44.378049] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:17.345 pt3 00:17:17.345 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:17.345 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:17.345 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:17.345 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:17.345 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:17.345 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:17.345 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:17.345 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:17.345 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:17.345 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:17.345 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.345 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.345 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.345 "name": "raid_bdev1", 00:17:17.345 "uuid": "368a7197-f7be-4444-aab3-838adbd17810", 00:17:17.345 "strip_size_kb": 0, 00:17:17.345 "state": "configuring", 00:17:17.345 "raid_level": "raid1", 00:17:17.345 "superblock": true, 00:17:17.345 "num_base_bdevs": 4, 00:17:17.345 "num_base_bdevs_discovered": 2, 00:17:17.345 "num_base_bdevs_operational": 3, 00:17:17.345 "base_bdevs_list": [ 00:17:17.345 { 00:17:17.345 "name": null, 00:17:17.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.345 "is_configured": false, 00:17:17.345 "data_offset": 2048, 00:17:17.345 "data_size": 63488 00:17:17.345 }, 00:17:17.345 { 00:17:17.345 "name": "pt2", 00:17:17.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.345 "is_configured": true, 00:17:17.345 "data_offset": 2048, 00:17:17.345 "data_size": 63488 00:17:17.345 }, 00:17:17.345 { 00:17:17.345 "name": "pt3", 00:17:17.345 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.345 "is_configured": true, 00:17:17.345 "data_offset": 2048, 00:17:17.345 "data_size": 63488 00:17:17.345 }, 00:17:17.345 { 00:17:17.345 "name": null, 00:17:17.345 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:17.345 "is_configured": false, 00:17:17.345 "data_offset": 2048, 00:17:17.345 "data_size": 63488 00:17:17.345 } 00:17:17.345 ] 
00:17:17.345 }' 00:17:17.345 06:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:17.345 06:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.930 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:17:17.930 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:17:17.930 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:17.930 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:18.211 [2024-08-14 06:48:45.351346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:18.211 [2024-08-14 06:48:45.351562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.211 [2024-08-14 06:48:45.351608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:18.211 [2024-08-14 06:48:45.351638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.211 [2024-08-14 06:48:45.352264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.211 [2024-08-14 06:48:45.352330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:18.211 [2024-08-14 06:48:45.352470] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:18.211 [2024-08-14 06:48:45.352529] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:18.211 [2024-08-14 06:48:45.352694] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:17:18.211 [2024-08-14 06:48:45.352731] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:18.211 [2024-08-14 06:48:45.353026] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:17:18.211 [2024-08-14 06:48:45.353227] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:17:18.211 [2024-08-14 06:48:45.353283] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:17:18.211 [2024-08-14 06:48:45.353438] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.211 pt4 00:17:18.211 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:18.211 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:18.211 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:18.211 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:18.211 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:18.211 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:18.211 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.211 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.211 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.211 06:48:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.211 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.211 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.473 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:18.473 "name": "raid_bdev1", 00:17:18.473 "uuid": "368a7197-f7be-4444-aab3-838adbd17810", 00:17:18.473 "strip_size_kb": 0, 00:17:18.473 "state": "online", 00:17:18.473 "raid_level": "raid1", 00:17:18.473 "superblock": true, 00:17:18.473 "num_base_bdevs": 4, 00:17:18.473 "num_base_bdevs_discovered": 3, 00:17:18.473 "num_base_bdevs_operational": 3, 00:17:18.473 "base_bdevs_list": [ 00:17:18.473 { 00:17:18.473 "name": null, 00:17:18.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.473 "is_configured": false, 00:17:18.473 "data_offset": 2048, 00:17:18.473 "data_size": 63488 00:17:18.473 }, 00:17:18.473 { 00:17:18.473 "name": "pt2", 00:17:18.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.473 "is_configured": true, 00:17:18.473 "data_offset": 2048, 00:17:18.473 "data_size": 63488 00:17:18.473 }, 00:17:18.473 { 00:17:18.473 "name": "pt3", 00:17:18.473 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:18.473 "is_configured": true, 00:17:18.473 "data_offset": 2048, 00:17:18.473 "data_size": 63488 00:17:18.473 }, 00:17:18.473 { 00:17:18.473 "name": "pt4", 00:17:18.473 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:18.473 "is_configured": true, 00:17:18.473 "data_offset": 2048, 00:17:18.473 "data_size": 63488 00:17:18.473 } 00:17:18.473 ] 00:17:18.473 }' 00:17:18.473 06:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:18.473 06:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.042 06:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:19.302 [2024-08-14 06:48:46.361951] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.302 [2024-08-14 06:48:46.362015] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.302 [2024-08-14 06:48:46.362133] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.302 [2024-08-14 06:48:46.362274] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.302 [2024-08-14 06:48:46.362293] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:17:19.302 06:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.302 06:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:17:19.562 06:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:17:19.562 06:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:17:19.562 06:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 4 -gt 2 ']' 00:17:19.562 06:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=3 00:17:19.562 06:48:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:19.820 06:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:19.820 [2024-08-14 06:48:47.033057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:19.820 [2024-08-14 06:48:47.033216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.820 [2024-08-14 06:48:47.033250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:19.820 [2024-08-14 06:48:47.033266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.820 [2024-08-14 06:48:47.036328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.820 [2024-08-14 06:48:47.036385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:19.820 [2024-08-14 06:48:47.036519] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:19.820 [2024-08-14 06:48:47.036587] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:19.820 [2024-08-14 06:48:47.036749] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:19.820 [2024-08-14 06:48:47.036770] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.820 [2024-08-14 06:48:47.036792] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:17:19.820 [2024-08-14 06:48:47.036838] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:19.820 pt1 00:17:19.820 [2024-08-14 06:48:47.036989] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:19.820 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4 -gt 2 ']' 00:17:19.820 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:19.820 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:19.821 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:19.821 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:19.821 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:19.821 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:19.821 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:19.821 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:19.821 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:19.821 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:19.821 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.821 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
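The trace above shows the pattern verify_raid_bdev_state follows after base bdevs are swapped in and out: it stashes the expected state, RAID level, strip size and operational bdev count in locals, then pulls the live view over the raid-test RPC socket and filters it with jq. A minimal sketch of that query step, assembled only from the rpc.py and jq calls visible in this trace (the real helper in bdev_raid.sh performs additional field checks that are not reproduced here):

    # Query the target over the raid-test RPC socket and keep only raid_bdev1's entry.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    raid_bdev_info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1")')
    # Compare the reported state ("configuring" or "online") against the expectation.
    state=$(jq -r '.state' <<< "$raid_bdev_info")
    [[ "$state" == "configuring" ]] || echo "unexpected raid state: $state"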
00:17:20.080 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:20.080 "name": "raid_bdev1", 00:17:20.080 "uuid": "368a7197-f7be-4444-aab3-838adbd17810", 00:17:20.080 "strip_size_kb": 0, 00:17:20.080 "state": "configuring", 00:17:20.080 "raid_level": "raid1", 00:17:20.080 "superblock": true, 00:17:20.080 "num_base_bdevs": 4, 00:17:20.080 "num_base_bdevs_discovered": 2, 00:17:20.080 "num_base_bdevs_operational": 3, 00:17:20.080 "base_bdevs_list": [ 00:17:20.080 { 00:17:20.080 "name": null, 00:17:20.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.080 "is_configured": false, 00:17:20.080 "data_offset": 2048, 00:17:20.080 "data_size": 63488 00:17:20.080 }, 00:17:20.080 { 00:17:20.080 "name": "pt2", 00:17:20.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.080 "is_configured": true, 00:17:20.080 "data_offset": 2048, 00:17:20.080 "data_size": 63488 00:17:20.080 }, 00:17:20.080 { 00:17:20.080 "name": "pt3", 00:17:20.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:20.080 "is_configured": true, 00:17:20.080 "data_offset": 2048, 00:17:20.080 "data_size": 63488 00:17:20.080 }, 00:17:20.080 { 00:17:20.080 "name": null, 00:17:20.080 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:20.080 "is_configured": false, 00:17:20.080 "data_offset": 2048, 00:17:20.080 "data_size": 63488 00:17:20.080 } 00:17:20.080 ] 00:17:20.080 }' 00:17:20.080 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:20.080 06:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.649 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:17:20.649 06:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:20.908 06:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:17:20.908 06:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:21.168 [2024-08-14 06:48:48.303048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:21.168 [2024-08-14 06:48:48.303287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.168 [2024-08-14 06:48:48.303343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:21.168 [2024-08-14 06:48:48.303377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.168 [2024-08-14 06:48:48.303945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.168 [2024-08-14 06:48:48.304012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:21.168 [2024-08-14 06:48:48.304154] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:21.168 [2024-08-14 06:48:48.304229] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:21.168 [2024-08-14 06:48:48.304444] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:17:21.168 [2024-08-14 06:48:48.304492] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:21.168 [2024-08-14 06:48:48.304805] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:17:21.168 [2024-08-14 06:48:48.304973] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:17:21.168 [2024-08-14 06:48:48.305019] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:17:21.168 [2024-08-14 06:48:48.305185] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.168 pt4 00:17:21.168 06:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:21.168 06:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:21.168 06:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:21.168 06:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:21.168 06:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:21.168 06:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:21.168 06:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.168 06:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.168 06:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:21.168 06:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.168 06:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.168 06:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.428 06:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:21.428 "name": "raid_bdev1", 00:17:21.428 "uuid": "368a7197-f7be-4444-aab3-838adbd17810", 00:17:21.428 "strip_size_kb": 0, 00:17:21.428 "state": "online", 00:17:21.428 "raid_level": "raid1", 00:17:21.428 "superblock": true, 00:17:21.428 "num_base_bdevs": 4, 00:17:21.428 "num_base_bdevs_discovered": 3, 00:17:21.428 "num_base_bdevs_operational": 3, 00:17:21.428 "base_bdevs_list": [ 00:17:21.428 { 00:17:21.428 "name": null, 00:17:21.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.428 "is_configured": false, 00:17:21.428 "data_offset": 2048, 00:17:21.428 "data_size": 63488 00:17:21.428 }, 00:17:21.428 { 00:17:21.428 "name": "pt2", 00:17:21.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.428 "is_configured": true, 00:17:21.428 "data_offset": 2048, 00:17:21.428 "data_size": 63488 00:17:21.428 }, 00:17:21.428 { 00:17:21.428 "name": "pt3", 00:17:21.428 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:21.428 "is_configured": true, 00:17:21.428 "data_offset": 2048, 00:17:21.428 "data_size": 63488 00:17:21.428 }, 00:17:21.428 { 00:17:21.428 "name": "pt4", 00:17:21.428 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:21.428 "is_configured": true, 00:17:21.428 "data_offset": 2048, 00:17:21.428 "data_size": 63488 00:17:21.428 } 00:17:21.428 ] 00:17:21.428 }' 00:17:21.428 06:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:21.428 06:48:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.997 06:48:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:21.997 06:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:22.257 06:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:17:22.257 06:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:22.257 06:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:17:22.257 [2024-08-14 06:48:49.497505] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.517 06:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 368a7197-f7be-4444-aab3-838adbd17810 '!=' 368a7197-f7be-4444-aab3-838adbd17810 ']' 00:17:22.517 06:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 91352 00:17:22.517 06:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 91352 ']' 00:17:22.517 06:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 91352 00:17:22.517 06:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:17:22.517 06:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:22.517 06:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91352 00:17:22.517 06:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:22.517 06:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:22.517 06:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91352' 00:17:22.517 killing process with pid 91352 00:17:22.517 06:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 91352 00:17:22.517 [2024-08-14 06:48:49.565148] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.517 06:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 91352 00:17:22.517 [2024-08-14 06:48:49.565351] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.517 [2024-08-14 06:48:49.565446] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.517 [2024-08-14 06:48:49.565472] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:17:22.517 [2024-08-14 06:48:49.649135] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.777 06:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:17:22.777 00:17:22.777 real 0m23.154s 00:17:22.777 user 0m42.663s 00:17:22.777 sys 0m3.602s 00:17:22.777 06:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:22.777 ************************************ 00:17:22.777 END TEST raid_superblock_test 00:17:22.777 ************************************ 00:17:22.777 06:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.037 06:48:50 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:17:23.037 06:48:50 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 
5 -le 1 ']' 00:17:23.037 06:48:50 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:23.037 06:48:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.037 ************************************ 00:17:23.037 START TEST raid_read_error_test 00:17:23.037 ************************************ 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 4 read 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.R6u9Xlf1Wy 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=92149 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 
92149 /var/tmp/spdk-raid.sock 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 92149 ']' 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:23.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:23.037 06:48:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.037 [2024-08-14 06:48:50.206362] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:17:23.037 [2024-08-14 06:48:50.206641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92149 ] 00:17:23.296 [2024-08-14 06:48:50.356019] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.296 [2024-08-14 06:48:50.438357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.296 [2024-08-14 06:48:50.516561] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.296 [2024-08-14 06:48:50.516745] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.865 06:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:23.865 06:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:17:23.865 06:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:23.865 06:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:24.124 BaseBdev1_malloc 00:17:24.124 06:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:24.383 true 00:17:24.383 06:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:24.643 [2024-08-14 06:48:51.705430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:24.643 [2024-08-14 06:48:51.705662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.643 [2024-08-14 06:48:51.705697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:17:24.643 [2024-08-14 06:48:51.705725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.643 [2024-08-14 06:48:51.708493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:17:24.643 [2024-08-14 06:48:51.708545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:24.643 BaseBdev1 00:17:24.643 06:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:24.643 06:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:24.903 BaseBdev2_malloc 00:17:24.903 06:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:25.162 true 00:17:25.162 06:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:25.162 [2024-08-14 06:48:52.392197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:25.162 [2024-08-14 06:48:52.392318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.162 [2024-08-14 06:48:52.392349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:25.162 [2024-08-14 06:48:52.392364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.162 [2024-08-14 06:48:52.395061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.162 [2024-08-14 06:48:52.395109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:25.162 BaseBdev2 00:17:25.162 06:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:25.162 06:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:25.421 BaseBdev3_malloc 00:17:25.421 06:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:25.680 true 00:17:25.680 06:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:25.939 [2024-08-14 06:48:53.073394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:25.939 [2024-08-14 06:48:53.073622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.939 [2024-08-14 06:48:53.073657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:17:25.939 [2024-08-14 06:48:53.073671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.939 [2024-08-14 06:48:53.076437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.939 [2024-08-14 06:48:53.076485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:25.939 BaseBdev3 00:17:25.939 06:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:25.939 06:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:26.199 
BaseBdev4_malloc 00:17:26.199 06:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:17:26.459 true 00:17:26.459 06:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:26.459 [2024-08-14 06:48:53.703853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:26.459 [2024-08-14 06:48:53.703968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.459 [2024-08-14 06:48:53.703999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:26.459 [2024-08-14 06:48:53.704017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.459 [2024-08-14 06:48:53.706666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.459 [2024-08-14 06:48:53.706718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:26.459 BaseBdev4 00:17:26.717 06:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:17:26.717 [2024-08-14 06:48:53.915541] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.717 [2024-08-14 06:48:53.917860] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:26.717 [2024-08-14 06:48:53.917960] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:26.717 [2024-08-14 06:48:53.918035] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:26.718 [2024-08-14 06:48:53.918305] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:17:26.718 [2024-08-14 06:48:53.918323] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:26.718 [2024-08-14 06:48:53.918683] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:17:26.718 [2024-08-14 06:48:53.918872] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:17:26.718 [2024-08-14 06:48:53.918883] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:17:26.718 [2024-08-14 06:48:53.919097] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.718 06:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:26.718 06:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:26.718 06:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:26.718 06:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:26.718 06:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:26.718 06:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:26.718 06:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.718 06:48:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.718 06:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.718 06:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.718 06:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.718 06:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.977 06:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:26.977 "name": "raid_bdev1", 00:17:26.977 "uuid": "e4154af5-839a-4399-9510-a59e201ad8bf", 00:17:26.977 "strip_size_kb": 0, 00:17:26.977 "state": "online", 00:17:26.977 "raid_level": "raid1", 00:17:26.977 "superblock": true, 00:17:26.977 "num_base_bdevs": 4, 00:17:26.977 "num_base_bdevs_discovered": 4, 00:17:26.977 "num_base_bdevs_operational": 4, 00:17:26.977 "base_bdevs_list": [ 00:17:26.977 { 00:17:26.977 "name": "BaseBdev1", 00:17:26.977 "uuid": "d80d30f5-8341-557c-94f7-8240bba53f63", 00:17:26.977 "is_configured": true, 00:17:26.977 "data_offset": 2048, 00:17:26.977 "data_size": 63488 00:17:26.977 }, 00:17:26.977 { 00:17:26.977 "name": "BaseBdev2", 00:17:26.977 "uuid": "47a9cd4a-57c0-534b-ba5e-e46ba24ca9aa", 00:17:26.977 "is_configured": true, 00:17:26.977 "data_offset": 2048, 00:17:26.977 "data_size": 63488 00:17:26.977 }, 00:17:26.977 { 00:17:26.977 "name": "BaseBdev3", 00:17:26.977 "uuid": "0c695c91-cc1f-5921-a062-3a58b7dbf55d", 00:17:26.977 "is_configured": true, 00:17:26.977 "data_offset": 2048, 00:17:26.977 "data_size": 63488 00:17:26.977 }, 00:17:26.977 { 00:17:26.977 "name": "BaseBdev4", 00:17:26.977 "uuid": "9aca4f3c-2bd0-5d30-8fd0-70ff781b4825", 00:17:26.977 "is_configured": true, 00:17:26.977 "data_offset": 2048, 00:17:26.977 "data_size": 63488 00:17:26.977 } 00:17:26.977 ] 00:17:26.977 }' 00:17:26.977 06:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:26.977 06:48:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.546 06:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:27.546 06:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:17:27.546 [2024-08-14 06:48:54.762650] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:17:28.486 06:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:28.746 06:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:17:28.746 06:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:28.746 06:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:17:28.746 06:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:17:28.746 06:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:28.746 06:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:28.746 06:48:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:28.746 06:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:28.746 06:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:28.746 06:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:28.746 06:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.746 06:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.746 06:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.746 06:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.746 06:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.746 06:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.006 06:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.006 "name": "raid_bdev1", 00:17:29.006 "uuid": "e4154af5-839a-4399-9510-a59e201ad8bf", 00:17:29.006 "strip_size_kb": 0, 00:17:29.006 "state": "online", 00:17:29.006 "raid_level": "raid1", 00:17:29.006 "superblock": true, 00:17:29.006 "num_base_bdevs": 4, 00:17:29.006 "num_base_bdevs_discovered": 4, 00:17:29.006 "num_base_bdevs_operational": 4, 00:17:29.006 "base_bdevs_list": [ 00:17:29.006 { 00:17:29.006 "name": "BaseBdev1", 00:17:29.006 "uuid": "d80d30f5-8341-557c-94f7-8240bba53f63", 00:17:29.006 "is_configured": true, 00:17:29.006 "data_offset": 2048, 00:17:29.006 "data_size": 63488 00:17:29.006 }, 00:17:29.006 { 00:17:29.006 "name": "BaseBdev2", 00:17:29.006 "uuid": "47a9cd4a-57c0-534b-ba5e-e46ba24ca9aa", 00:17:29.006 "is_configured": true, 00:17:29.006 "data_offset": 2048, 00:17:29.006 "data_size": 63488 00:17:29.006 }, 00:17:29.006 { 00:17:29.006 "name": "BaseBdev3", 00:17:29.006 "uuid": "0c695c91-cc1f-5921-a062-3a58b7dbf55d", 00:17:29.006 "is_configured": true, 00:17:29.006 "data_offset": 2048, 00:17:29.006 "data_size": 63488 00:17:29.006 }, 00:17:29.006 { 00:17:29.006 "name": "BaseBdev4", 00:17:29.006 "uuid": "9aca4f3c-2bd0-5d30-8fd0-70ff781b4825", 00:17:29.006 "is_configured": true, 00:17:29.006 "data_offset": 2048, 00:17:29.006 "data_size": 63488 00:17:29.006 } 00:17:29.006 ] 00:17:29.006 }' 00:17:29.006 06:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:29.006 06:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.576 06:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:29.836 [2024-08-14 06:48:56.924219] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.836 [2024-08-14 06:48:56.924385] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.836 [2024-08-14 06:48:56.927146] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.836 [2024-08-14 06:48:56.927268] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.836 [2024-08-14 06:48:56.927426] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:17:29.836 [2024-08-14 06:48:56.927496] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:17:29.836 0 00:17:29.836 06:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 92149 00:17:29.836 06:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 92149 ']' 00:17:29.836 06:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 92149 00:17:29.836 06:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:17:29.836 06:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:29.836 06:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92149 00:17:29.836 06:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:29.836 06:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:29.836 06:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92149' 00:17:29.836 killing process with pid 92149 00:17:29.836 06:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 92149 00:17:29.836 [2024-08-14 06:48:56.989982] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:29.836 06:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 92149 00:17:29.836 [2024-08-14 06:48:57.060076] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:30.432 06:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.R6u9Xlf1Wy 00:17:30.432 06:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:17:30.432 06:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:17:30.432 06:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:17:30.432 ************************************ 00:17:30.432 END TEST raid_read_error_test 00:17:30.432 ************************************ 00:17:30.432 06:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:17:30.432 06:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:30.432 06:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:30.432 06:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:30.432 00:17:30.432 real 0m7.346s 00:17:30.432 user 0m11.480s 00:17:30.432 sys 0m1.145s 00:17:30.432 06:48:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:30.432 06:48:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.432 06:48:57 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:17:30.432 06:48:57 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:17:30.432 06:48:57 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:30.432 06:48:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:30.432 ************************************ 00:17:30.432 START TEST raid_write_error_test 00:17:30.432 ************************************ 00:17:30.432 06:48:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 4 write 
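Before the write-error variant starts below, it is worth noting how raid_io_error_test builds its device stack; the read-error run above already traced every step. Each leg is a malloc bdev wrapped by a bdev_error device (EE_BaseBdevN_malloc) with a passthru on top, the four passthru bdevs are assembled into a raid1 array with a superblock, and an error is later injected into the first leg. A condensed sketch using only the rpc.py calls that appear verbatim in this trace (the actual function loops over the base bdevs and runs bdevperf in between, which is omitted here):

    # Build one error-injectable leg of the raid1 array, as traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1_malloc    # backing malloc bdev
    "$rpc" -s "$sock" bdev_error_create BaseBdev1_malloc               # error wrapper EE_BaseBdev1_malloc
    "$rpc" -s "$sock" bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # ...repeat for BaseBdev2..4, then assemble the array with a superblock (-s):
    "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
    # Later, fail writes on the first leg to exercise raid1 error handling:
    "$rpc" -s "$sock" bdev_error_inject_error EE_BaseBdev1_malloc write failure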
00:17:30.432 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:17:30.432 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:17:30.432 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:17:30.432 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:17:30.432 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:30.432 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:17:30.432 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:30.432 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.35NWKnyOov 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=92339 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 92339 /var/tmp/spdk-raid.sock 00:17:30.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
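The bdevperf command line that follows mirrors the read-error run: the tool is started against the raid-test RPC socket and, because of -z, it idles until the test has configured the bdev stack and triggers the workload via the perform_tests RPC. A sketch of that two-step flow, copied from the command lines that appear in this trace (flag semantics are as commonly documented for bdevperf and are not asserted beyond that):

    # Start bdevperf in wait-for-RPC mode against the raid-test socket (runs in the background).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
    # ...configure the malloc/error/passthru/raid bdevs over the same socket, then fire the workload:
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests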
00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 92339 ']' 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:30.433 06:48:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.433 [2024-08-14 06:48:57.621200] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:17:30.433 [2024-08-14 06:48:57.621435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92339 ] 00:17:30.704 [2024-08-14 06:48:57.749961] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.704 [2024-08-14 06:48:57.832229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.704 [2024-08-14 06:48:57.910068] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.704 [2024-08-14 06:48:57.910280] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.275 06:48:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:31.275 06:48:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:17:31.275 06:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:31.275 06:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:31.537 BaseBdev1_malloc 00:17:31.537 06:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:31.797 true 00:17:31.797 06:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:32.056 [2024-08-14 06:48:59.162146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:32.056 [2024-08-14 06:48:59.162378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.056 [2024-08-14 06:48:59.162425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:17:32.056 [2024-08-14 06:48:59.162462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.056 [2024-08-14 06:48:59.165238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.056 [2024-08-14 06:48:59.165341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:32.056 BaseBdev1 00:17:32.056 06:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:32.056 06:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:32.316 BaseBdev2_malloc 00:17:32.316 06:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:32.576 true 00:17:32.576 06:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:32.576 [2024-08-14 06:48:59.792926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:32.576 [2024-08-14 06:48:59.793159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.576 [2024-08-14 06:48:59.793208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:32.576 [2024-08-14 06:48:59.793221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.576 [2024-08-14 06:48:59.795936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.576 [2024-08-14 06:48:59.795986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:32.576 BaseBdev2 00:17:32.576 06:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:32.576 06:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:32.836 BaseBdev3_malloc 00:17:32.836 06:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:33.095 true 00:17:33.095 06:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:33.354 [2024-08-14 06:49:00.446681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:33.354 [2024-08-14 06:49:00.446883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.354 [2024-08-14 06:49:00.446919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:17:33.354 [2024-08-14 06:49:00.446932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.354 [2024-08-14 06:49:00.449641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.354 [2024-08-14 06:49:00.449696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:33.354 BaseBdev3 00:17:33.354 06:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:33.354 06:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:33.612 BaseBdev4_malloc 00:17:33.612 06:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:17:33.872 true 00:17:33.872 06:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:33.872 [2024-08-14 06:49:01.064955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:33.872 [2024-08-14 06:49:01.065195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.872 [2024-08-14 06:49:01.065231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:33.872 [2024-08-14 06:49:01.065248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.872 [2024-08-14 06:49:01.068069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.872 [2024-08-14 06:49:01.068119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:33.872 BaseBdev4 00:17:33.872 06:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:17:34.132 [2024-08-14 06:49:01.288726] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.132 [2024-08-14 06:49:01.291228] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.132 [2024-08-14 06:49:01.291338] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:34.132 [2024-08-14 06:49:01.291410] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:34.132 [2024-08-14 06:49:01.291659] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:17:34.132 [2024-08-14 06:49:01.291675] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:34.132 [2024-08-14 06:49:01.292027] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:17:34.132 [2024-08-14 06:49:01.292337] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:17:34.132 [2024-08-14 06:49:01.292376] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:17:34.132 [2024-08-14 06:49:01.292630] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.132 06:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:34.132 06:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:34.132 06:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:34.132 06:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:34.132 06:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:34.132 06:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:34.132 06:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.132 06:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.132 06:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.132 06:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.132 06:49:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.132 06:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.391 06:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.391 "name": "raid_bdev1", 00:17:34.391 "uuid": "c00e9963-1acc-408f-b34f-72b34909042c", 00:17:34.391 "strip_size_kb": 0, 00:17:34.391 "state": "online", 00:17:34.391 "raid_level": "raid1", 00:17:34.391 "superblock": true, 00:17:34.391 "num_base_bdevs": 4, 00:17:34.391 "num_base_bdevs_discovered": 4, 00:17:34.391 "num_base_bdevs_operational": 4, 00:17:34.391 "base_bdevs_list": [ 00:17:34.391 { 00:17:34.391 "name": "BaseBdev1", 00:17:34.391 "uuid": "8038b1bf-4420-5a64-be83-687df1f10597", 00:17:34.391 "is_configured": true, 00:17:34.391 "data_offset": 2048, 00:17:34.391 "data_size": 63488 00:17:34.391 }, 00:17:34.391 { 00:17:34.391 "name": "BaseBdev2", 00:17:34.391 "uuid": "fb319f52-750d-5e55-80fe-445427821d03", 00:17:34.391 "is_configured": true, 00:17:34.391 "data_offset": 2048, 00:17:34.391 "data_size": 63488 00:17:34.391 }, 00:17:34.391 { 00:17:34.391 "name": "BaseBdev3", 00:17:34.391 "uuid": "af1a7768-68f3-5d2e-9bae-580e454cbc13", 00:17:34.391 "is_configured": true, 00:17:34.391 "data_offset": 2048, 00:17:34.391 "data_size": 63488 00:17:34.391 }, 00:17:34.391 { 00:17:34.391 "name": "BaseBdev4", 00:17:34.391 "uuid": "d938a6cb-29ed-575c-83f7-befcb1b50bf8", 00:17:34.391 "is_configured": true, 00:17:34.391 "data_offset": 2048, 00:17:34.391 "data_size": 63488 00:17:34.391 } 00:17:34.391 ] 00:17:34.391 }' 00:17:34.391 06:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.391 06:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.961 06:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:17:34.961 06:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:35.220 [2024-08-14 06:49:02.237046] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:36.158 [2024-08-14 06:49:03.337657] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:36.158 [2024-08-14 06:49:03.337778] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:36.158 [2024-08-14 06:49:03.338053] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=3 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.158 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.418 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.418 "name": "raid_bdev1", 00:17:36.418 "uuid": "c00e9963-1acc-408f-b34f-72b34909042c", 00:17:36.418 "strip_size_kb": 0, 00:17:36.418 "state": "online", 00:17:36.418 "raid_level": "raid1", 00:17:36.418 "superblock": true, 00:17:36.418 "num_base_bdevs": 4, 00:17:36.418 "num_base_bdevs_discovered": 3, 00:17:36.418 "num_base_bdevs_operational": 3, 00:17:36.418 "base_bdevs_list": [ 00:17:36.418 { 00:17:36.418 "name": null, 00:17:36.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.418 "is_configured": false, 00:17:36.418 "data_offset": 2048, 00:17:36.418 "data_size": 63488 00:17:36.418 }, 00:17:36.418 { 00:17:36.418 "name": "BaseBdev2", 00:17:36.418 "uuid": "fb319f52-750d-5e55-80fe-445427821d03", 00:17:36.418 "is_configured": true, 00:17:36.418 "data_offset": 2048, 00:17:36.418 "data_size": 63488 00:17:36.418 }, 00:17:36.418 { 00:17:36.418 "name": "BaseBdev3", 00:17:36.418 "uuid": "af1a7768-68f3-5d2e-9bae-580e454cbc13", 00:17:36.418 "is_configured": true, 00:17:36.418 "data_offset": 2048, 00:17:36.418 "data_size": 63488 00:17:36.418 }, 00:17:36.418 { 00:17:36.418 "name": "BaseBdev4", 00:17:36.418 "uuid": "d938a6cb-29ed-575c-83f7-befcb1b50bf8", 00:17:36.418 "is_configured": true, 00:17:36.418 "data_offset": 2048, 00:17:36.418 "data_size": 63488 00:17:36.418 } 00:17:36.418 ] 00:17:36.418 }' 00:17:36.418 06:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.418 06:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.986 06:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:37.245 [2024-08-14 06:49:04.327302] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:37.245 [2024-08-14 06:49:04.327361] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:37.245 [2024-08-14 06:49:04.329889] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.245 0 00:17:37.246 [2024-08-14 06:49:04.330047] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.246 [2024-08-14 
06:49:04.330211] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.246 [2024-08-14 06:49:04.330227] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:17:37.246 06:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 92339 00:17:37.246 06:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 92339 ']' 00:17:37.246 06:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 92339 00:17:37.246 06:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:17:37.246 06:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:37.246 06:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92339 00:17:37.246 06:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:37.246 06:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:37.246 06:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92339' 00:17:37.246 killing process with pid 92339 00:17:37.246 06:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 92339 00:17:37.246 [2024-08-14 06:49:04.383470] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:37.246 06:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 92339 00:17:37.246 [2024-08-14 06:49:04.453485] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:37.815 06:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:17:37.815 06:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:17:37.815 06:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.35NWKnyOov 00:17:37.815 06:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:17:37.815 06:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:17:37.815 06:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:37.815 06:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:37.815 06:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:37.815 00:17:37.815 real 0m7.314s 00:17:37.815 user 0m11.511s 00:17:37.815 sys 0m1.092s 00:17:37.815 06:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:37.815 06:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.815 ************************************ 00:17:37.815 END TEST raid_write_error_test 00:17:37.815 ************************************ 00:17:37.815 06:49:04 bdev_raid -- bdev/bdev_raid.sh@955 -- # '[' true = true ']' 00:17:37.815 06:49:04 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:17:37.815 06:49:04 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:17:37.815 06:49:04 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:17:37.815 06:49:04 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:37.815 06:49:04 bdev_raid -- common/autotest_common.sh@10 -- 
# set +x 00:17:37.815 ************************************ 00:17:37.815 START TEST raid_rebuild_test 00:17:37.815 ************************************ 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 false false true 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=92521 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 92521 /var/tmp/spdk-raid.sock 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 92521 ']' 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:17:37.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:37.815 06:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.815 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:37.815 Zero copy mechanism will not be used. 00:17:37.815 [2024-08-14 06:49:05.000683] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:17:37.816 [2024-08-14 06:49:05.000858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92521 ] 00:17:38.074 [2024-08-14 06:49:05.154124] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.074 [2024-08-14 06:49:05.235840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.074 [2024-08-14 06:49:05.314374] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.074 [2024-08-14 06:49:05.314437] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.642 06:49:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:38.642 06:49:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:17:38.642 06:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:17:38.642 06:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:38.901 BaseBdev1_malloc 00:17:38.901 06:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:39.160 [2024-08-14 06:49:06.267752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:39.160 [2024-08-14 06:49:06.267984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.160 [2024-08-14 06:49:06.268021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:39.160 [2024-08-14 06:49:06.268046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.160 [2024-08-14 06:49:06.270756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.160 [2024-08-14 06:49:06.270818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:39.160 BaseBdev1 00:17:39.160 06:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:17:39.160 06:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:39.420 BaseBdev2_malloc 00:17:39.420 06:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:39.679 [2024-08-14 06:49:06.686503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:39.679 [2024-08-14 06:49:06.686643] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.680 [2024-08-14 06:49:06.686678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:39.680 [2024-08-14 06:49:06.686692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.680 [2024-08-14 06:49:06.689448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.680 [2024-08-14 06:49:06.689592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:39.680 BaseBdev2 00:17:39.680 06:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:17:39.680 spare_malloc 00:17:39.940 06:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:39.940 spare_delay 00:17:39.940 06:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:17:40.200 [2024-08-14 06:49:07.339920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:40.200 [2024-08-14 06:49:07.340040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.200 [2024-08-14 06:49:07.340073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:40.200 [2024-08-14 06:49:07.340086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.200 [2024-08-14 06:49:07.342832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.200 [2024-08-14 06:49:07.342889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:40.200 spare 00:17:40.200 06:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:17:40.460 [2024-08-14 06:49:07.531674] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.460 [2024-08-14 06:49:07.534078] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:40.460 [2024-08-14 06:49:07.534244] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:17:40.460 [2024-08-14 06:49:07.534271] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:40.460 [2024-08-14 06:49:07.534646] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:17:40.460 [2024-08-14 06:49:07.534859] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:17:40.460 [2024-08-14 06:49:07.534870] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:17:40.460 [2024-08-14 06:49:07.535057] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.460 06:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:40.460 06:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:40.460 06:49:07 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:40.460 06:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:40.460 06:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:40.460 06:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:40.460 06:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.460 06:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.460 06:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.460 06:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.460 06:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.460 06:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.720 06:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:40.720 "name": "raid_bdev1", 00:17:40.720 "uuid": "f853dbbe-cd74-46c5-b937-950178bdce39", 00:17:40.720 "strip_size_kb": 0, 00:17:40.720 "state": "online", 00:17:40.720 "raid_level": "raid1", 00:17:40.720 "superblock": false, 00:17:40.720 "num_base_bdevs": 2, 00:17:40.720 "num_base_bdevs_discovered": 2, 00:17:40.720 "num_base_bdevs_operational": 2, 00:17:40.720 "base_bdevs_list": [ 00:17:40.720 { 00:17:40.720 "name": "BaseBdev1", 00:17:40.720 "uuid": "104cdaf5-e89f-58da-ac41-2ec82e63ca8b", 00:17:40.720 "is_configured": true, 00:17:40.720 "data_offset": 0, 00:17:40.720 "data_size": 65536 00:17:40.720 }, 00:17:40.720 { 00:17:40.720 "name": "BaseBdev2", 00:17:40.720 "uuid": "4b1be76b-8c24-5d1c-80d9-0a4e62d3b594", 00:17:40.720 "is_configured": true, 00:17:40.720 "data_offset": 0, 00:17:40.720 "data_size": 65536 00:17:40.720 } 00:17:40.720 ] 00:17:40.720 }' 00:17:40.720 06:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:40.720 06:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.290 06:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:41.290 06:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:17:41.290 [2024-08-14 06:49:08.518265] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:41.290 06:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:17:41.290 06:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.290 06:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:41.550 06:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:17:41.550 06:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:17:41.550 06:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:17:41.550 06:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:17:41.550 06:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock 
raid_bdev1 /dev/nbd0 00:17:41.550 06:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:41.550 06:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:41.550 06:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:41.550 06:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:41.550 06:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:41.550 06:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:41.550 06:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:41.550 06:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:41.550 06:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:41.810 [2024-08-14 06:49:08.953516] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:17:41.810 /dev/nbd0 00:17:41.810 06:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:41.810 06:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:41.810 06:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:17:41.810 06:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:17:41.810 06:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:17:41.810 06:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:17:41.810 06:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:17:41.810 06:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:17:41.810 06:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:17:41.810 06:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:17:41.810 06:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:41.810 1+0 records in 00:17:41.810 1+0 records out 00:17:41.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518635 s, 7.9 MB/s 00:17:41.810 06:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:41.810 06:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:17:41.810 06:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:41.810 06:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:17:41.810 06:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:17:41.810 06:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:41.810 06:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:41.810 06:49:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:17:41.810 06:49:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:17:41.810 06:49:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:17:47.091 65536+0 records in 00:17:47.091 65536+0 records out 00:17:47.091 33554432 bytes (34 MB, 32 MiB) copied, 4.77319 s, 7.0 MB/s 00:17:47.091 06:49:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:17:47.091 06:49:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:47.091 06:49:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:47.091 06:49:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:47.091 06:49:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:47.091 06:49:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:47.091 06:49:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:47.091 [2024-08-14 06:49:14.020502] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:17:47.091 [2024-08-14 06:49:14.236368] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.091 06:49:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:47.351 06:49:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.351 "name": "raid_bdev1", 00:17:47.351 "uuid": "f853dbbe-cd74-46c5-b937-950178bdce39", 00:17:47.351 "strip_size_kb": 0, 00:17:47.351 "state": "online", 00:17:47.351 "raid_level": "raid1", 00:17:47.351 "superblock": false, 00:17:47.351 "num_base_bdevs": 2, 00:17:47.351 "num_base_bdevs_discovered": 1, 00:17:47.351 "num_base_bdevs_operational": 1, 00:17:47.351 "base_bdevs_list": [ 00:17:47.351 { 00:17:47.351 "name": null, 00:17:47.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.351 "is_configured": false, 00:17:47.351 "data_offset": 0, 00:17:47.351 "data_size": 65536 00:17:47.351 }, 00:17:47.351 { 00:17:47.351 "name": "BaseBdev2", 00:17:47.351 "uuid": "4b1be76b-8c24-5d1c-80d9-0a4e62d3b594", 00:17:47.351 "is_configured": true, 00:17:47.351 "data_offset": 0, 00:17:47.351 "data_size": 65536 00:17:47.351 } 00:17:47.351 ] 00:17:47.351 }' 00:17:47.351 06:49:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.351 06:49:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.921 06:49:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:17:48.180 [2024-08-14 06:49:15.246640] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:48.180 [2024-08-14 06:49:15.254582] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 00:17:48.180 [2024-08-14 06:49:15.257012] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:48.180 06:49:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:17:49.120 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.120 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:49.120 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:49.120 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:49.120 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:49.120 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.120 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.380 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:49.380 "name": "raid_bdev1", 00:17:49.380 "uuid": "f853dbbe-cd74-46c5-b937-950178bdce39", 00:17:49.380 "strip_size_kb": 0, 00:17:49.380 "state": "online", 00:17:49.380 "raid_level": "raid1", 00:17:49.380 "superblock": false, 00:17:49.380 "num_base_bdevs": 2, 00:17:49.380 "num_base_bdevs_discovered": 2, 00:17:49.380 "num_base_bdevs_operational": 2, 00:17:49.380 "process": { 00:17:49.380 "type": "rebuild", 00:17:49.380 "target": "spare", 00:17:49.380 "progress": { 00:17:49.380 "blocks": 24576, 00:17:49.380 "percent": 37 00:17:49.380 } 00:17:49.380 }, 00:17:49.380 "base_bdevs_list": [ 00:17:49.380 { 00:17:49.380 "name": "spare", 00:17:49.380 "uuid": "5697d9aa-69c5-579a-bc4e-87d8d852ccea", 00:17:49.380 "is_configured": true, 00:17:49.380 "data_offset": 0, 00:17:49.380 
"data_size": 65536 00:17:49.380 }, 00:17:49.380 { 00:17:49.380 "name": "BaseBdev2", 00:17:49.380 "uuid": "4b1be76b-8c24-5d1c-80d9-0a4e62d3b594", 00:17:49.380 "is_configured": true, 00:17:49.380 "data_offset": 0, 00:17:49.380 "data_size": 65536 00:17:49.380 } 00:17:49.380 ] 00:17:49.380 }' 00:17:49.380 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:49.380 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:49.380 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:49.380 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.380 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:17:49.640 [2024-08-14 06:49:16.762309] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.640 [2024-08-14 06:49:16.770064] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:49.640 [2024-08-14 06:49:16.770180] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.640 [2024-08-14 06:49:16.770200] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.640 [2024-08-14 06:49:16.770215] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:49.640 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.640 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:49.640 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:49.640 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:49.640 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:49.640 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:49.640 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.640 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.640 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.640 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.640 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.640 06:49:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.901 06:49:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.901 "name": "raid_bdev1", 00:17:49.901 "uuid": "f853dbbe-cd74-46c5-b937-950178bdce39", 00:17:49.901 "strip_size_kb": 0, 00:17:49.901 "state": "online", 00:17:49.901 "raid_level": "raid1", 00:17:49.901 "superblock": false, 00:17:49.901 "num_base_bdevs": 2, 00:17:49.901 "num_base_bdevs_discovered": 1, 00:17:49.901 "num_base_bdevs_operational": 1, 00:17:49.901 "base_bdevs_list": [ 00:17:49.901 { 00:17:49.901 "name": null, 00:17:49.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.901 "is_configured": false, 
00:17:49.901 "data_offset": 0, 00:17:49.901 "data_size": 65536 00:17:49.901 }, 00:17:49.901 { 00:17:49.901 "name": "BaseBdev2", 00:17:49.901 "uuid": "4b1be76b-8c24-5d1c-80d9-0a4e62d3b594", 00:17:49.901 "is_configured": true, 00:17:49.901 "data_offset": 0, 00:17:49.901 "data_size": 65536 00:17:49.901 } 00:17:49.901 ] 00:17:49.901 }' 00:17:49.901 06:49:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.901 06:49:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.471 06:49:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:50.471 06:49:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:50.471 06:49:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:17:50.471 06:49:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:17:50.471 06:49:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:50.471 06:49:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.471 06:49:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.731 06:49:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:50.731 "name": "raid_bdev1", 00:17:50.731 "uuid": "f853dbbe-cd74-46c5-b937-950178bdce39", 00:17:50.731 "strip_size_kb": 0, 00:17:50.731 "state": "online", 00:17:50.731 "raid_level": "raid1", 00:17:50.731 "superblock": false, 00:17:50.731 "num_base_bdevs": 2, 00:17:50.731 "num_base_bdevs_discovered": 1, 00:17:50.731 "num_base_bdevs_operational": 1, 00:17:50.731 "base_bdevs_list": [ 00:17:50.731 { 00:17:50.731 "name": null, 00:17:50.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.731 "is_configured": false, 00:17:50.731 "data_offset": 0, 00:17:50.731 "data_size": 65536 00:17:50.731 }, 00:17:50.731 { 00:17:50.731 "name": "BaseBdev2", 00:17:50.731 "uuid": "4b1be76b-8c24-5d1c-80d9-0a4e62d3b594", 00:17:50.731 "is_configured": true, 00:17:50.731 "data_offset": 0, 00:17:50.731 "data_size": 65536 00:17:50.731 } 00:17:50.731 ] 00:17:50.731 }' 00:17:50.731 06:49:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:50.731 06:49:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:17:50.731 06:49:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:50.731 06:49:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:17:50.731 06:49:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:17:50.991 [2024-08-14 06:49:18.108353] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.991 [2024-08-14 06:49:18.116002] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d062f0 00:17:50.991 [2024-08-14 06:49:18.118682] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:50.991 06:49:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:17:51.929 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:17:51.929 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:51.929 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:51.929 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:51.929 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:51.929 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.929 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.188 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:52.188 "name": "raid_bdev1", 00:17:52.188 "uuid": "f853dbbe-cd74-46c5-b937-950178bdce39", 00:17:52.188 "strip_size_kb": 0, 00:17:52.188 "state": "online", 00:17:52.188 "raid_level": "raid1", 00:17:52.188 "superblock": false, 00:17:52.188 "num_base_bdevs": 2, 00:17:52.188 "num_base_bdevs_discovered": 2, 00:17:52.188 "num_base_bdevs_operational": 2, 00:17:52.188 "process": { 00:17:52.188 "type": "rebuild", 00:17:52.188 "target": "spare", 00:17:52.188 "progress": { 00:17:52.188 "blocks": 24576, 00:17:52.188 "percent": 37 00:17:52.188 } 00:17:52.188 }, 00:17:52.188 "base_bdevs_list": [ 00:17:52.188 { 00:17:52.188 "name": "spare", 00:17:52.188 "uuid": "5697d9aa-69c5-579a-bc4e-87d8d852ccea", 00:17:52.188 "is_configured": true, 00:17:52.188 "data_offset": 0, 00:17:52.188 "data_size": 65536 00:17:52.188 }, 00:17:52.188 { 00:17:52.188 "name": "BaseBdev2", 00:17:52.188 "uuid": "4b1be76b-8c24-5d1c-80d9-0a4e62d3b594", 00:17:52.188 "is_configured": true, 00:17:52.188 "data_offset": 0, 00:17:52.188 "data_size": 65536 00:17:52.188 } 00:17:52.188 ] 00:17:52.188 }' 00:17:52.188 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:52.188 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.188 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:52.447 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.447 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:17:52.447 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:17:52.447 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:17:52.447 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:17:52.447 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=718 00:17:52.447 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:17:52.447 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.447 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:52.447 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:52.447 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:52.447 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:52.447 06:49:19 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.447 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.447 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:52.447 "name": "raid_bdev1", 00:17:52.447 "uuid": "f853dbbe-cd74-46c5-b937-950178bdce39", 00:17:52.447 "strip_size_kb": 0, 00:17:52.447 "state": "online", 00:17:52.447 "raid_level": "raid1", 00:17:52.447 "superblock": false, 00:17:52.447 "num_base_bdevs": 2, 00:17:52.447 "num_base_bdevs_discovered": 2, 00:17:52.447 "num_base_bdevs_operational": 2, 00:17:52.447 "process": { 00:17:52.447 "type": "rebuild", 00:17:52.447 "target": "spare", 00:17:52.447 "progress": { 00:17:52.447 "blocks": 30720, 00:17:52.447 "percent": 46 00:17:52.447 } 00:17:52.447 }, 00:17:52.447 "base_bdevs_list": [ 00:17:52.447 { 00:17:52.447 "name": "spare", 00:17:52.447 "uuid": "5697d9aa-69c5-579a-bc4e-87d8d852ccea", 00:17:52.447 "is_configured": true, 00:17:52.447 "data_offset": 0, 00:17:52.447 "data_size": 65536 00:17:52.447 }, 00:17:52.447 { 00:17:52.447 "name": "BaseBdev2", 00:17:52.447 "uuid": "4b1be76b-8c24-5d1c-80d9-0a4e62d3b594", 00:17:52.447 "is_configured": true, 00:17:52.447 "data_offset": 0, 00:17:52.447 "data_size": 65536 00:17:52.447 } 00:17:52.447 ] 00:17:52.447 }' 00:17:52.447 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:52.706 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.706 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:52.706 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.706 06:49:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:17:53.644 06:49:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:17:53.644 06:49:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.644 06:49:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:53.644 06:49:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:53.644 06:49:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:53.644 06:49:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:53.645 06:49:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.645 06:49:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.904 06:49:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:53.904 "name": "raid_bdev1", 00:17:53.904 "uuid": "f853dbbe-cd74-46c5-b937-950178bdce39", 00:17:53.904 "strip_size_kb": 0, 00:17:53.904 "state": "online", 00:17:53.904 "raid_level": "raid1", 00:17:53.904 "superblock": false, 00:17:53.905 "num_base_bdevs": 2, 00:17:53.905 "num_base_bdevs_discovered": 2, 00:17:53.905 "num_base_bdevs_operational": 2, 00:17:53.905 "process": { 00:17:53.905 "type": "rebuild", 00:17:53.905 "target": "spare", 00:17:53.905 "progress": { 00:17:53.905 "blocks": 57344, 00:17:53.905 "percent": 87 00:17:53.905 } 00:17:53.905 
}, 00:17:53.905 "base_bdevs_list": [ 00:17:53.905 { 00:17:53.905 "name": "spare", 00:17:53.905 "uuid": "5697d9aa-69c5-579a-bc4e-87d8d852ccea", 00:17:53.905 "is_configured": true, 00:17:53.905 "data_offset": 0, 00:17:53.905 "data_size": 65536 00:17:53.905 }, 00:17:53.905 { 00:17:53.905 "name": "BaseBdev2", 00:17:53.905 "uuid": "4b1be76b-8c24-5d1c-80d9-0a4e62d3b594", 00:17:53.905 "is_configured": true, 00:17:53.905 "data_offset": 0, 00:17:53.905 "data_size": 65536 00:17:53.905 } 00:17:53.905 ] 00:17:53.905 }' 00:17:53.905 06:49:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:53.905 06:49:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.905 06:49:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:53.905 06:49:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.905 06:49:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:17:54.164 [2024-08-14 06:49:21.343251] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:54.164 [2024-08-14 06:49:21.343476] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:54.164 [2024-08-14 06:49:21.343546] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.103 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:17:55.103 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.103 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:55.103 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:55.103 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:55.103 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:55.103 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.103 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.103 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:55.103 "name": "raid_bdev1", 00:17:55.103 "uuid": "f853dbbe-cd74-46c5-b937-950178bdce39", 00:17:55.103 "strip_size_kb": 0, 00:17:55.103 "state": "online", 00:17:55.103 "raid_level": "raid1", 00:17:55.103 "superblock": false, 00:17:55.103 "num_base_bdevs": 2, 00:17:55.103 "num_base_bdevs_discovered": 2, 00:17:55.103 "num_base_bdevs_operational": 2, 00:17:55.103 "base_bdevs_list": [ 00:17:55.103 { 00:17:55.103 "name": "spare", 00:17:55.103 "uuid": "5697d9aa-69c5-579a-bc4e-87d8d852ccea", 00:17:55.103 "is_configured": true, 00:17:55.103 "data_offset": 0, 00:17:55.103 "data_size": 65536 00:17:55.103 }, 00:17:55.103 { 00:17:55.103 "name": "BaseBdev2", 00:17:55.103 "uuid": "4b1be76b-8c24-5d1c-80d9-0a4e62d3b594", 00:17:55.103 "is_configured": true, 00:17:55.103 "data_offset": 0, 00:17:55.103 "data_size": 65536 00:17:55.103 } 00:17:55.103 ] 00:17:55.103 }' 00:17:55.103 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:55.103 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == 
\r\e\b\u\i\l\d ]] 00:17:55.103 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:55.362 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:17:55.362 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:17:55.362 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:55.362 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:55.362 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:17:55.362 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:17:55.362 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:55.362 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.362 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.362 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:55.362 "name": "raid_bdev1", 00:17:55.362 "uuid": "f853dbbe-cd74-46c5-b937-950178bdce39", 00:17:55.362 "strip_size_kb": 0, 00:17:55.362 "state": "online", 00:17:55.362 "raid_level": "raid1", 00:17:55.362 "superblock": false, 00:17:55.362 "num_base_bdevs": 2, 00:17:55.362 "num_base_bdevs_discovered": 2, 00:17:55.362 "num_base_bdevs_operational": 2, 00:17:55.362 "base_bdevs_list": [ 00:17:55.362 { 00:17:55.362 "name": "spare", 00:17:55.362 "uuid": "5697d9aa-69c5-579a-bc4e-87d8d852ccea", 00:17:55.362 "is_configured": true, 00:17:55.362 "data_offset": 0, 00:17:55.362 "data_size": 65536 00:17:55.362 }, 00:17:55.362 { 00:17:55.362 "name": "BaseBdev2", 00:17:55.362 "uuid": "4b1be76b-8c24-5d1c-80d9-0a4e62d3b594", 00:17:55.362 "is_configured": true, 00:17:55.362 "data_offset": 0, 00:17:55.362 "data_size": 65536 00:17:55.362 } 00:17:55.362 ] 00:17:55.362 }' 00:17:55.362 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:55.622 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:17:55.622 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:55.622 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:17:55.622 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:55.622 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:55.622 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:55.622 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:55.622 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:55.622 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:55.622 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.622 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.622 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.622 
06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.622 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.622 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.882 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.882 "name": "raid_bdev1", 00:17:55.882 "uuid": "f853dbbe-cd74-46c5-b937-950178bdce39", 00:17:55.882 "strip_size_kb": 0, 00:17:55.882 "state": "online", 00:17:55.882 "raid_level": "raid1", 00:17:55.882 "superblock": false, 00:17:55.882 "num_base_bdevs": 2, 00:17:55.882 "num_base_bdevs_discovered": 2, 00:17:55.882 "num_base_bdevs_operational": 2, 00:17:55.882 "base_bdevs_list": [ 00:17:55.882 { 00:17:55.882 "name": "spare", 00:17:55.882 "uuid": "5697d9aa-69c5-579a-bc4e-87d8d852ccea", 00:17:55.882 "is_configured": true, 00:17:55.882 "data_offset": 0, 00:17:55.882 "data_size": 65536 00:17:55.882 }, 00:17:55.882 { 00:17:55.882 "name": "BaseBdev2", 00:17:55.882 "uuid": "4b1be76b-8c24-5d1c-80d9-0a4e62d3b594", 00:17:55.882 "is_configured": true, 00:17:55.882 "data_offset": 0, 00:17:55.882 "data_size": 65536 00:17:55.882 } 00:17:55.882 ] 00:17:55.882 }' 00:17:55.882 06:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.882 06:49:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.451 06:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:56.451 [2024-08-14 06:49:23.627717] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.451 [2024-08-14 06:49:23.627887] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.451 [2024-08-14 06:49:23.628027] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.451 [2024-08-14 06:49:23.628138] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.451 [2024-08-14 06:49:23.628222] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:17:56.451 06:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:17:56.451 06:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.710 06:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:17:56.710 06:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:17:56.710 06:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:17:56.710 06:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:56.710 06:49:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:56.710 06:49:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:56.710 06:49:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:56.710 06:49:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:17:56.710 06:49:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:56.710 06:49:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:56.710 06:49:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:56.710 06:49:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:56.711 06:49:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:56.970 /dev/nbd0 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:56.970 1+0 records in 00:17:56.970 1+0 records out 00:17:56.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025063 s, 16.3 MB/s 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:56.970 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:17:57.230 /dev/nbd1 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:57.230 1+0 records in 00:17:57.230 1+0 records out 00:17:57.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296905 s, 13.8 MB/s 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:57.230 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:17:57.490 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:57.490 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:57.490 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:57.490 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:57.490 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:57.490 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:57.490 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:57.490 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:57.490 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:57.490 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 92521 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 92521 ']' 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 92521 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92521 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92521' 00:17:57.750 killing process with pid 92521 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@965 -- # kill 92521 00:17:57.750 Received shutdown signal, test time was about 60.000000 seconds 00:17:57.750 00:17:57.750 Latency(us) 00:17:57.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.750 =================================================================================================================== 00:17:57.750 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.750 [2024-08-14 06:49:24.906614] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:57.750 06:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # wait 92521 00:17:57.750 [2024-08-14 06:49:24.963224] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:17:58.319 00:17:58.319 real 0m20.428s 00:17:58.319 user 0m27.585s 00:17:58.319 sys 0m4.076s 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:58.319 ************************************ 00:17:58.319 END TEST raid_rebuild_test 00:17:58.319 ************************************ 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.319 06:49:25 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:17:58.319 06:49:25 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:17:58.319 06:49:25 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:58.319 06:49:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.319 
************************************ 00:17:58.319 START TEST raid_rebuild_test_sb 00:17:58.319 ************************************ 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false true 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=93002 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 93002 /var/tmp/spdk-raid.sock 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 93002 ']' 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 
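The positional arguments echoed into the locals above map to raid_level=raid1, num_base_bdevs=2, superblock=true (hence create_arg gains -s), background_io=false and verify=true. Below is a minimal sketch of the launch step the next lines report on, using only the binary path, flags and socket shown in the xtrace; the waitforlisten polling body is an assumption apart from its 100-retry cap.

# Launch bdevperf as the RPC target for the test (command line copied from the log).
rpc_sock=/var/tmp/spdk-raid.sock
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r "$rpc_sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
# waitforlisten: poll until the UNIX-domain socket accepts RPCs (sketch only; the
# real helper is in autotest_common.sh).
for ((retry = 0; retry < 100; retry++)); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null && break
    sleep 0.5                          # assumed interval
done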
00:17:58.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:58.319 06:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.319 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:58.319 Zero copy mechanism will not be used. 00:17:58.319 [2024-08-14 06:49:25.507222] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:17:58.319 [2024-08-14 06:49:25.507374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93002 ] 00:17:58.579 [2024-08-14 06:49:25.656511] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.579 [2024-08-14 06:49:25.733012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.579 [2024-08-14 06:49:25.809369] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.579 [2024-08-14 06:49:25.809415] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.147 06:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:59.147 06:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:17:59.147 06:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:17:59.147 06:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:59.406 BaseBdev1_malloc 00:17:59.406 06:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:59.665 [2024-08-14 06:49:26.725191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:59.665 [2024-08-14 06:49:26.725406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.665 [2024-08-14 06:49:26.725447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:59.665 [2024-08-14 06:49:26.725461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.665 [2024-08-14 06:49:26.728148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.665 [2024-08-14 06:49:26.728209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:59.665 BaseBdev1 00:17:59.665 06:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:17:59.665 06:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:59.931 BaseBdev2_malloc 00:17:59.931 06:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc 
-p BaseBdev2 00:17:59.931 [2024-08-14 06:49:27.163557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:59.931 [2024-08-14 06:49:27.163669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.931 [2024-08-14 06:49:27.163700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:59.931 [2024-08-14 06:49:27.163713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.931 [2024-08-14 06:49:27.166569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.931 [2024-08-14 06:49:27.166623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:59.931 BaseBdev2 00:18:00.209 06:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:00.209 spare_malloc 00:18:00.209 06:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:00.484 spare_delay 00:18:00.484 06:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:00.743 [2024-08-14 06:49:27.823089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:00.743 [2024-08-14 06:49:27.823217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.743 [2024-08-14 06:49:27.823251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:00.743 [2024-08-14 06:49:27.823265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.743 [2024-08-14 06:49:27.825899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.743 [2024-08-14 06:49:27.825945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:00.743 spare 00:18:00.743 06:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:01.004 [2024-08-14 06:49:28.026892] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.004 [2024-08-14 06:49:28.029450] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:01.004 [2024-08-14 06:49:28.029693] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:18:01.004 [2024-08-14 06:49:28.029721] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:01.004 [2024-08-14 06:49:28.030144] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:18:01.004 [2024-08-14 06:49:28.030379] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:18:01.004 [2024-08-14 06:49:28.030399] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:18:01.004 [2024-08-14 06:49:28.030665] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.004 06:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:01.004 06:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:01.004 06:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:01.004 06:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:01.004 06:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:01.004 06:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:01.004 06:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:01.004 06:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:01.004 06:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.004 06:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.004 06:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.004 06:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.264 06:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:01.264 "name": "raid_bdev1", 00:18:01.264 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:01.264 "strip_size_kb": 0, 00:18:01.264 "state": "online", 00:18:01.264 "raid_level": "raid1", 00:18:01.264 "superblock": true, 00:18:01.264 "num_base_bdevs": 2, 00:18:01.264 "num_base_bdevs_discovered": 2, 00:18:01.264 "num_base_bdevs_operational": 2, 00:18:01.264 "base_bdevs_list": [ 00:18:01.264 { 00:18:01.264 "name": "BaseBdev1", 00:18:01.264 "uuid": "c374a945-478b-56dd-a4b3-661640fe6819", 00:18:01.264 "is_configured": true, 00:18:01.264 "data_offset": 2048, 00:18:01.264 "data_size": 63488 00:18:01.264 }, 00:18:01.264 { 00:18:01.264 "name": "BaseBdev2", 00:18:01.264 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:01.264 "is_configured": true, 00:18:01.264 "data_offset": 2048, 00:18:01.264 "data_size": 63488 00:18:01.264 } 00:18:01.264 ] 00:18:01.264 }' 00:18:01.264 06:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:01.264 06:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.833 06:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:18:01.833 06:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:01.833 [2024-08-14 06:49:29.053451] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.833 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:18:01.833 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.833 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:02.093 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:18:02.093 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:18:02.093 06:49:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:18:02.093 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:18:02.093 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:18:02.093 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:02.093 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:02.093 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:02.093 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:02.093 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:02.093 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:02.093 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:02.093 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:02.093 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:02.353 [2024-08-14 06:49:29.544454] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:18:02.353 /dev/nbd0 00:18:02.353 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:02.353 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:02.353 06:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:18:02.353 06:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:18:02.353 06:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:02.353 06:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:02.353 06:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:18:02.353 06:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:18:02.353 06:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:02.353 06:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:02.353 06:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:02.353 1+0 records in 00:18:02.353 1+0 records out 00:18:02.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342882 s, 11.9 MB/s 00:18:02.353 06:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.353 06:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:18:02.353 06:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.613 06:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:02.613 06:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:18:02.613 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 
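Taken together, the rpc.py calls above assemble the bdev stack under test before any data is written; the sketch below condenses them into one place with names, sizes and flags exactly as logged (only the grouping into one script is editorial).

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc           # 32 MB backing store, 512 B blocks
$rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
$rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
$rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
$rpc bdev_malloc_create 32 512 -b spare_malloc               # future rebuild target
$rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$rpc bdev_passthru_create -b spare_delay -p spare
$rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1   # -s: on-disk superblock
$rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks'                 # 63488 in this run
$rpc bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset'  # 2048 in this run
$rpc nbd_start_disk raid_bdev1 /dev/nbd0                     # expose the array for dd/cmp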
00:18:02.613 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:02.613 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:18:02.613 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:18:02.613 06:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:18:07.890 63488+0 records in 00:18:07.890 63488+0 records out 00:18:07.890 32505856 bytes (33 MB, 31 MiB) copied, 4.70813 s, 6.9 MB/s 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:07.890 [2024-08-14 06:49:34.551083] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:07.890 [2024-08-14 06:49:34.778867] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.890 06:49:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.890 06:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.890 06:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:07.890 "name": "raid_bdev1", 00:18:07.890 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:07.890 "strip_size_kb": 0, 00:18:07.890 "state": "online", 00:18:07.890 "raid_level": "raid1", 00:18:07.890 "superblock": true, 00:18:07.890 "num_base_bdevs": 2, 00:18:07.890 "num_base_bdevs_discovered": 1, 00:18:07.890 "num_base_bdevs_operational": 1, 00:18:07.890 "base_bdevs_list": [ 00:18:07.890 { 00:18:07.890 "name": null, 00:18:07.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.890 "is_configured": false, 00:18:07.890 "data_offset": 2048, 00:18:07.890 "data_size": 63488 00:18:07.890 }, 00:18:07.890 { 00:18:07.890 "name": "BaseBdev2", 00:18:07.890 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:07.890 "is_configured": true, 00:18:07.890 "data_offset": 2048, 00:18:07.890 "data_size": 63488 00:18:07.890 } 00:18:07.890 ] 00:18:07.890 }' 00:18:07.890 06:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:07.890 06:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.460 06:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:08.719 [2024-08-14 06:49:35.733991] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:08.719 [2024-08-14 06:49:35.741972] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:18:08.719 [2024-08-14 06:49:35.744268] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:08.719 06:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:09.658 06:49:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.658 06:49:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:09.658 06:49:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:09.658 06:49:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:09.658 06:49:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:09.658 06:49:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.658 06:49:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.917 06:49:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:09.917 "name": "raid_bdev1", 00:18:09.917 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:09.917 "strip_size_kb": 0, 00:18:09.917 "state": "online", 00:18:09.917 "raid_level": "raid1", 00:18:09.917 "superblock": true, 00:18:09.917 
"num_base_bdevs": 2, 00:18:09.917 "num_base_bdevs_discovered": 2, 00:18:09.917 "num_base_bdevs_operational": 2, 00:18:09.917 "process": { 00:18:09.917 "type": "rebuild", 00:18:09.917 "target": "spare", 00:18:09.917 "progress": { 00:18:09.917 "blocks": 22528, 00:18:09.917 "percent": 35 00:18:09.917 } 00:18:09.917 }, 00:18:09.917 "base_bdevs_list": [ 00:18:09.917 { 00:18:09.917 "name": "spare", 00:18:09.917 "uuid": "0f1f2e89-fb00-5b4e-bb81-588ee897312e", 00:18:09.917 "is_configured": true, 00:18:09.917 "data_offset": 2048, 00:18:09.917 "data_size": 63488 00:18:09.917 }, 00:18:09.917 { 00:18:09.917 "name": "BaseBdev2", 00:18:09.917 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:09.917 "is_configured": true, 00:18:09.917 "data_offset": 2048, 00:18:09.917 "data_size": 63488 00:18:09.917 } 00:18:09.917 ] 00:18:09.917 }' 00:18:09.917 06:49:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:09.917 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.917 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:09.917 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.917 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:10.178 [2024-08-14 06:49:37.245154] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.178 [2024-08-14 06:49:37.256998] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:10.178 [2024-08-14 06:49:37.257092] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.178 [2024-08-14 06:49:37.257110] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.178 [2024-08-14 06:49:37.257124] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:10.178 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:10.178 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:10.178 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:10.178 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:10.178 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:10.178 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:10.178 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:10.178 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:10.178 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:10.178 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:10.178 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.178 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.438 06:49:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:10.438 "name": "raid_bdev1", 00:18:10.438 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:10.438 "strip_size_kb": 0, 00:18:10.438 "state": "online", 00:18:10.438 "raid_level": "raid1", 00:18:10.438 "superblock": true, 00:18:10.438 "num_base_bdevs": 2, 00:18:10.438 "num_base_bdevs_discovered": 1, 00:18:10.438 "num_base_bdevs_operational": 1, 00:18:10.438 "base_bdevs_list": [ 00:18:10.438 { 00:18:10.438 "name": null, 00:18:10.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.438 "is_configured": false, 00:18:10.438 "data_offset": 2048, 00:18:10.438 "data_size": 63488 00:18:10.438 }, 00:18:10.438 { 00:18:10.438 "name": "BaseBdev2", 00:18:10.438 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:10.438 "is_configured": true, 00:18:10.438 "data_offset": 2048, 00:18:10.438 "data_size": 63488 00:18:10.438 } 00:18:10.438 ] 00:18:10.438 }' 00:18:10.438 06:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:10.438 06:49:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.007 06:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.007 06:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:11.007 06:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:11.007 06:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:11.007 06:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:11.007 06:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.007 06:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.008 06:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:11.008 "name": "raid_bdev1", 00:18:11.008 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:11.008 "strip_size_kb": 0, 00:18:11.008 "state": "online", 00:18:11.008 "raid_level": "raid1", 00:18:11.008 "superblock": true, 00:18:11.008 "num_base_bdevs": 2, 00:18:11.008 "num_base_bdevs_discovered": 1, 00:18:11.008 "num_base_bdevs_operational": 1, 00:18:11.008 "base_bdevs_list": [ 00:18:11.008 { 00:18:11.008 "name": null, 00:18:11.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.008 "is_configured": false, 00:18:11.008 "data_offset": 2048, 00:18:11.008 "data_size": 63488 00:18:11.008 }, 00:18:11.008 { 00:18:11.008 "name": "BaseBdev2", 00:18:11.008 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:11.008 "is_configured": true, 00:18:11.008 "data_offset": 2048, 00:18:11.008 "data_size": 63488 00:18:11.008 } 00:18:11.008 ] 00:18:11.008 }' 00:18:11.008 06:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:11.266 06:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:11.266 06:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:11.266 06:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:11.266 06:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:11.266 [2024-08-14 06:49:38.499211] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:11.266 [2024-08-14 06:49:38.506813] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e350 00:18:11.266 [2024-08-14 06:49:38.509131] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:11.525 06:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:18:12.463 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.463 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:12.463 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:12.463 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:12.463 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:12.463 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.463 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:12.723 "name": "raid_bdev1", 00:18:12.723 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:12.723 "strip_size_kb": 0, 00:18:12.723 "state": "online", 00:18:12.723 "raid_level": "raid1", 00:18:12.723 "superblock": true, 00:18:12.723 "num_base_bdevs": 2, 00:18:12.723 "num_base_bdevs_discovered": 2, 00:18:12.723 "num_base_bdevs_operational": 2, 00:18:12.723 "process": { 00:18:12.723 "type": "rebuild", 00:18:12.723 "target": "spare", 00:18:12.723 "progress": { 00:18:12.723 "blocks": 22528, 00:18:12.723 "percent": 35 00:18:12.723 } 00:18:12.723 }, 00:18:12.723 "base_bdevs_list": [ 00:18:12.723 { 00:18:12.723 "name": "spare", 00:18:12.723 "uuid": "0f1f2e89-fb00-5b4e-bb81-588ee897312e", 00:18:12.723 "is_configured": true, 00:18:12.723 "data_offset": 2048, 00:18:12.723 "data_size": 63488 00:18:12.723 }, 00:18:12.723 { 00:18:12.723 "name": "BaseBdev2", 00:18:12.723 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:12.723 "is_configured": true, 00:18:12.723 "data_offset": 2048, 00:18:12.723 "data_size": 63488 00:18:12.723 } 00:18:12.723 ] 00:18:12.723 }' 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:18:12.723 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # 
'[' raid1 = raid1 ']' 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=738 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.723 06:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.981 06:49:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:12.981 "name": "raid_bdev1", 00:18:12.981 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:12.981 "strip_size_kb": 0, 00:18:12.981 "state": "online", 00:18:12.981 "raid_level": "raid1", 00:18:12.981 "superblock": true, 00:18:12.981 "num_base_bdevs": 2, 00:18:12.981 "num_base_bdevs_discovered": 2, 00:18:12.981 "num_base_bdevs_operational": 2, 00:18:12.981 "process": { 00:18:12.981 "type": "rebuild", 00:18:12.981 "target": "spare", 00:18:12.981 "progress": { 00:18:12.981 "blocks": 28672, 00:18:12.981 "percent": 45 00:18:12.981 } 00:18:12.981 }, 00:18:12.981 "base_bdevs_list": [ 00:18:12.981 { 00:18:12.981 "name": "spare", 00:18:12.981 "uuid": "0f1f2e89-fb00-5b4e-bb81-588ee897312e", 00:18:12.981 "is_configured": true, 00:18:12.981 "data_offset": 2048, 00:18:12.981 "data_size": 63488 00:18:12.981 }, 00:18:12.981 { 00:18:12.981 "name": "BaseBdev2", 00:18:12.981 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:12.981 "is_configured": true, 00:18:12.981 "data_offset": 2048, 00:18:12.981 "data_size": 63488 00:18:12.981 } 00:18:12.981 ] 00:18:12.981 }' 00:18:12.981 06:49:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:12.981 06:49:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.981 06:49:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:12.981 06:49:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.981 06:49:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:18:13.919 06:49:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:18:13.919 06:49:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.919 06:49:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:13.919 06:49:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:13.919 06:49:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:13.919 06:49:41 bdev_raid.raid_rebuild_test_sb -- 
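The verify_raid_bdev_process checks above and below form the rebuild monitor: once a second the RAID state is re-read, and the loop continues only while a rebuild targeting the spare is still reported. A minimal sketch, reusing the jq filters from the xtrace; everything else about the loop is an assumption.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
timeout=738                            # value captured in the xtrace above
while (( SECONDS < timeout )); do
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    ptype=$(jq -r '.process.type // "none"'     <<< "$info")
    ptarget=$(jq -r '.process.target // "none"' <<< "$info")
    [[ $ptype == rebuild && $ptarget == spare ]] || break    # no longer rebuilding
    jq -r '.process.progress | "\(.blocks) blocks (\(.percent)%)"' <<< "$info"
    sleep 1
done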
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:13.919 06:49:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.919 06:49:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.179 06:49:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:14.179 "name": "raid_bdev1", 00:18:14.179 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:14.179 "strip_size_kb": 0, 00:18:14.179 "state": "online", 00:18:14.179 "raid_level": "raid1", 00:18:14.179 "superblock": true, 00:18:14.179 "num_base_bdevs": 2, 00:18:14.179 "num_base_bdevs_discovered": 2, 00:18:14.179 "num_base_bdevs_operational": 2, 00:18:14.179 "process": { 00:18:14.179 "type": "rebuild", 00:18:14.179 "target": "spare", 00:18:14.179 "progress": { 00:18:14.179 "blocks": 55296, 00:18:14.179 "percent": 87 00:18:14.179 } 00:18:14.179 }, 00:18:14.179 "base_bdevs_list": [ 00:18:14.179 { 00:18:14.179 "name": "spare", 00:18:14.179 "uuid": "0f1f2e89-fb00-5b4e-bb81-588ee897312e", 00:18:14.179 "is_configured": true, 00:18:14.179 "data_offset": 2048, 00:18:14.179 "data_size": 63488 00:18:14.179 }, 00:18:14.179 { 00:18:14.179 "name": "BaseBdev2", 00:18:14.179 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:14.179 "is_configured": true, 00:18:14.179 "data_offset": 2048, 00:18:14.179 "data_size": 63488 00:18:14.179 } 00:18:14.179 ] 00:18:14.179 }' 00:18:14.179 06:49:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:14.179 06:49:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.179 06:49:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:14.179 06:49:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.179 06:49:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:18:14.438 [2024-08-14 06:49:41.634638] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:14.438 [2024-08-14 06:49:41.634891] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:14.438 [2024-08-14 06:49:41.635068] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.376 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:18:15.376 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.376 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:15.376 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:15.376 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:15.376 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:15.376 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.376 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.376 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:15.376 
"name": "raid_bdev1", 00:18:15.376 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:15.376 "strip_size_kb": 0, 00:18:15.376 "state": "online", 00:18:15.376 "raid_level": "raid1", 00:18:15.376 "superblock": true, 00:18:15.376 "num_base_bdevs": 2, 00:18:15.376 "num_base_bdevs_discovered": 2, 00:18:15.376 "num_base_bdevs_operational": 2, 00:18:15.376 "base_bdevs_list": [ 00:18:15.376 { 00:18:15.376 "name": "spare", 00:18:15.376 "uuid": "0f1f2e89-fb00-5b4e-bb81-588ee897312e", 00:18:15.376 "is_configured": true, 00:18:15.376 "data_offset": 2048, 00:18:15.376 "data_size": 63488 00:18:15.376 }, 00:18:15.376 { 00:18:15.376 "name": "BaseBdev2", 00:18:15.376 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:15.376 "is_configured": true, 00:18:15.376 "data_offset": 2048, 00:18:15.376 "data_size": 63488 00:18:15.376 } 00:18:15.376 ] 00:18:15.376 }' 00:18:15.376 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:15.636 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:15.636 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:15.636 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:18:15.636 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:18:15.636 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:15.636 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:15.636 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:15.636 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:15.636 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:15.636 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.636 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:15.895 "name": "raid_bdev1", 00:18:15.895 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:15.895 "strip_size_kb": 0, 00:18:15.895 "state": "online", 00:18:15.895 "raid_level": "raid1", 00:18:15.895 "superblock": true, 00:18:15.895 "num_base_bdevs": 2, 00:18:15.895 "num_base_bdevs_discovered": 2, 00:18:15.895 "num_base_bdevs_operational": 2, 00:18:15.895 "base_bdevs_list": [ 00:18:15.895 { 00:18:15.895 "name": "spare", 00:18:15.895 "uuid": "0f1f2e89-fb00-5b4e-bb81-588ee897312e", 00:18:15.895 "is_configured": true, 00:18:15.895 "data_offset": 2048, 00:18:15.895 "data_size": 63488 00:18:15.895 }, 00:18:15.895 { 00:18:15.895 "name": "BaseBdev2", 00:18:15.895 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:15.895 "is_configured": true, 00:18:15.895 "data_offset": 2048, 00:18:15.895 "data_size": 63488 00:18:15.895 } 00:18:15.895 ] 00:18:15.895 }' 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.895 06:49:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.154 06:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.154 "name": "raid_bdev1", 00:18:16.154 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:16.154 "strip_size_kb": 0, 00:18:16.154 "state": "online", 00:18:16.154 "raid_level": "raid1", 00:18:16.154 "superblock": true, 00:18:16.154 "num_base_bdevs": 2, 00:18:16.154 "num_base_bdevs_discovered": 2, 00:18:16.154 "num_base_bdevs_operational": 2, 00:18:16.154 "base_bdevs_list": [ 00:18:16.154 { 00:18:16.154 "name": "spare", 00:18:16.154 "uuid": "0f1f2e89-fb00-5b4e-bb81-588ee897312e", 00:18:16.154 "is_configured": true, 00:18:16.154 "data_offset": 2048, 00:18:16.154 "data_size": 63488 00:18:16.154 }, 00:18:16.154 { 00:18:16.154 "name": "BaseBdev2", 00:18:16.154 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:16.154 "is_configured": true, 00:18:16.154 "data_offset": 2048, 00:18:16.154 "data_size": 63488 00:18:16.154 } 00:18:16.154 ] 00:18:16.154 }' 00:18:16.154 06:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.154 06:49:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.722 06:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:16.722 [2024-08-14 06:49:43.900579] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:16.722 [2024-08-14 06:49:43.900635] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.722 [2024-08-14 06:49:43.900761] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.722 [2024-08-14 06:49:43.900846] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.722 [2024-08-14 06:49:43.900858] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, 
state offline 00:18:16.722 06:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.722 06:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:18:16.981 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:18:16.981 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:18:16.981 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:18:16.981 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:16.981 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:16.981 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:16.981 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:16.981 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:16.981 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:16.981 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:16.981 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:16.981 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:16.981 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:17.240 /dev/nbd0 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.240 1+0 records in 00:18:17.240 1+0 records out 00:18:17.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003703 s, 11.1 MB/s 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
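With the RAID bdev deleted, the test exposes the surviving member and the rebuilt spare directly over NBD and byte-compares them. Because this variant uses a superblock, cmp starts at data_offset 2048 blocks x 512 B = 1,048,576 bytes (the earlier non-superblock test used cmp -i 0). A minimal sketch of that check, condensed from the surrounding calls:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc nbd_start_disk BaseBdev1 /dev/nbd0   # surviving original member
$rpc nbd_start_disk spare /dev/nbd1       # rebuilt replacement
cmp -i 1048576 /dev/nbd0 /dev/nbd1        # skip the superblock/data_offset region
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1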
00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:17.240 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:18:17.501 /dev/nbd1 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.501 1+0 records in 00:18:17.501 1+0 records out 00:18:17.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588283 s, 7.0 MB/s 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:18:17.501 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:17.761 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:17.761 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:17.761 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:17.761 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:17.761 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:17.761 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:17.761 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:17.761 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:17.761 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:17.761 06:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:18.021 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:18.021 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:18.021 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:18.021 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:18.021 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:18.021 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:18.021 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:18.021 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:18.021 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:18:18.021 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:18.280 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:18.280 [2024-08-14 06:49:45.519456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:18.280 [2024-08-14 06:49:45.519568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.280 [2024-08-14 06:49:45.519602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:18.280 [2024-08-14 06:49:45.519614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.280 [2024-08-14 06:49:45.522590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.280 spare 00:18:18.280 [2024-08-14 06:49:45.522722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:18.280 [2024-08-14 06:49:45.522831] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:18.280 [2024-08-14 06:49:45.522888] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:18:18.280 [2024-08-14 06:49:45.523100] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:18.540 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:18.540 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:18.540 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:18.540 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:18.540 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:18.540 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:18.540 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:18.540 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:18.540 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:18.540 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:18.540 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.540 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.540 [2024-08-14 06:49:45.623081] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:18:18.540 [2024-08-14 06:49:45.623274] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:18.540 [2024-08-14 06:49:45.623708] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cae960 00:18:18.540 [2024-08-14 06:49:45.623974] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:18:18.540 [2024-08-14 06:49:45.624020] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:18:18.540 [2024-08-14 06:49:45.624253] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.540 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:18.540 "name": "raid_bdev1", 00:18:18.540 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:18.540 "strip_size_kb": 0, 00:18:18.540 "state": "online", 00:18:18.540 "raid_level": "raid1", 00:18:18.540 "superblock": true, 00:18:18.540 "num_base_bdevs": 2, 00:18:18.540 "num_base_bdevs_discovered": 2, 00:18:18.540 "num_base_bdevs_operational": 2, 00:18:18.540 "base_bdevs_list": [ 00:18:18.540 { 00:18:18.540 "name": "spare", 00:18:18.540 "uuid": "0f1f2e89-fb00-5b4e-bb81-588ee897312e", 00:18:18.540 "is_configured": true, 00:18:18.540 "data_offset": 2048, 00:18:18.540 "data_size": 63488 00:18:18.540 }, 00:18:18.540 { 00:18:18.540 "name": "BaseBdev2", 00:18:18.540 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:18.540 "is_configured": true, 00:18:18.540 "data_offset": 2048, 00:18:18.540 "data_size": 63488 00:18:18.540 } 00:18:18.540 ] 00:18:18.540 }' 00:18:18.540 06:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:18.540 06:49:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.109 06:49:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:19.109 06:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:19.109 06:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:19.109 06:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:19.109 06:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:19.109 06:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.109 06:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.368 06:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:19.368 "name": "raid_bdev1", 00:18:19.368 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:19.368 "strip_size_kb": 0, 00:18:19.368 "state": "online", 00:18:19.368 "raid_level": "raid1", 00:18:19.368 "superblock": true, 00:18:19.368 "num_base_bdevs": 2, 00:18:19.368 "num_base_bdevs_discovered": 2, 00:18:19.368 "num_base_bdevs_operational": 2, 00:18:19.368 "base_bdevs_list": [ 00:18:19.368 { 00:18:19.368 "name": "spare", 00:18:19.368 "uuid": "0f1f2e89-fb00-5b4e-bb81-588ee897312e", 00:18:19.368 "is_configured": true, 00:18:19.368 "data_offset": 2048, 00:18:19.368 "data_size": 63488 00:18:19.368 }, 00:18:19.368 { 00:18:19.368 "name": "BaseBdev2", 00:18:19.368 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:19.368 "is_configured": true, 00:18:19.369 "data_offset": 2048, 00:18:19.369 "data_size": 63488 00:18:19.369 } 00:18:19.369 ] 00:18:19.369 }' 00:18:19.369 06:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:19.628 06:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:19.628 06:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:19.628 06:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:19.628 06:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.628 06:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:19.888 06:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.888 06:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:19.888 [2024-08-14 06:49:47.066098] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.888 06:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.888 06:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:19.888 06:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:19.888 06:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:19.888 06:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:19.888 06:49:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:19.888 06:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:19.888 06:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:19.888 06:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:19.888 06:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:19.888 06:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.888 06:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.208 06:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:20.208 "name": "raid_bdev1", 00:18:20.208 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:20.208 "strip_size_kb": 0, 00:18:20.208 "state": "online", 00:18:20.208 "raid_level": "raid1", 00:18:20.208 "superblock": true, 00:18:20.208 "num_base_bdevs": 2, 00:18:20.208 "num_base_bdevs_discovered": 1, 00:18:20.208 "num_base_bdevs_operational": 1, 00:18:20.208 "base_bdevs_list": [ 00:18:20.208 { 00:18:20.208 "name": null, 00:18:20.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.208 "is_configured": false, 00:18:20.208 "data_offset": 2048, 00:18:20.208 "data_size": 63488 00:18:20.208 }, 00:18:20.208 { 00:18:20.208 "name": "BaseBdev2", 00:18:20.208 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:20.208 "is_configured": true, 00:18:20.208 "data_offset": 2048, 00:18:20.208 "data_size": 63488 00:18:20.208 } 00:18:20.208 ] 00:18:20.208 }' 00:18:20.208 06:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:20.208 06:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.778 06:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:20.778 [2024-08-14 06:49:48.016716] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.778 [2024-08-14 06:49:48.017030] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:20.778 [2024-08-14 06:49:48.017051] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:20.778 [2024-08-14 06:49:48.017104] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.778 [2024-08-14 06:49:48.024653] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caea30 00:18:20.778 [2024-08-14 06:49:48.027140] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:21.037 06:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:18:21.976 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.976 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:21.976 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:21.976 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:21.976 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:21.976 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.976 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.236 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:22.236 "name": "raid_bdev1", 00:18:22.236 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:22.236 "strip_size_kb": 0, 00:18:22.236 "state": "online", 00:18:22.236 "raid_level": "raid1", 00:18:22.236 "superblock": true, 00:18:22.236 "num_base_bdevs": 2, 00:18:22.236 "num_base_bdevs_discovered": 2, 00:18:22.236 "num_base_bdevs_operational": 2, 00:18:22.236 "process": { 00:18:22.236 "type": "rebuild", 00:18:22.236 "target": "spare", 00:18:22.236 "progress": { 00:18:22.236 "blocks": 24576, 00:18:22.236 "percent": 38 00:18:22.236 } 00:18:22.236 }, 00:18:22.236 "base_bdevs_list": [ 00:18:22.236 { 00:18:22.236 "name": "spare", 00:18:22.236 "uuid": "0f1f2e89-fb00-5b4e-bb81-588ee897312e", 00:18:22.236 "is_configured": true, 00:18:22.236 "data_offset": 2048, 00:18:22.236 "data_size": 63488 00:18:22.236 }, 00:18:22.236 { 00:18:22.236 "name": "BaseBdev2", 00:18:22.236 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:22.236 "is_configured": true, 00:18:22.236 "data_offset": 2048, 00:18:22.236 "data_size": 63488 00:18:22.236 } 00:18:22.236 ] 00:18:22.236 }' 00:18:22.236 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:22.236 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.236 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:22.236 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.236 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:22.495 [2024-08-14 06:49:49.608471] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:22.495 [2024-08-14 06:49:49.640321] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:22.495 [2024-08-14 06:49:49.640510] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.495 
[2024-08-14 06:49:49.640550] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:22.495 [2024-08-14 06:49:49.640576] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:22.495 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:22.495 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:22.495 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:22.495 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:22.495 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:22.495 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:22.495 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:22.495 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:22.496 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:22.496 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:22.496 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.496 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.754 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:22.754 "name": "raid_bdev1", 00:18:22.754 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:22.754 "strip_size_kb": 0, 00:18:22.754 "state": "online", 00:18:22.754 "raid_level": "raid1", 00:18:22.754 "superblock": true, 00:18:22.754 "num_base_bdevs": 2, 00:18:22.754 "num_base_bdevs_discovered": 1, 00:18:22.754 "num_base_bdevs_operational": 1, 00:18:22.754 "base_bdevs_list": [ 00:18:22.754 { 00:18:22.754 "name": null, 00:18:22.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.754 "is_configured": false, 00:18:22.754 "data_offset": 2048, 00:18:22.754 "data_size": 63488 00:18:22.754 }, 00:18:22.754 { 00:18:22.754 "name": "BaseBdev2", 00:18:22.754 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:22.754 "is_configured": true, 00:18:22.754 "data_offset": 2048, 00:18:22.754 "data_size": 63488 00:18:22.754 } 00:18:22.754 ] 00:18:22.754 }' 00:18:22.754 06:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:22.754 06:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.327 06:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:23.586 [2024-08-14 06:49:50.638770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:23.586 [2024-08-14 06:49:50.638985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.586 [2024-08-14 06:49:50.639036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:23.586 [2024-08-14 06:49:50.639071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.586 [2024-08-14 06:49:50.639649] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.586 [2024-08-14 06:49:50.639718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:23.586 [2024-08-14 06:49:50.639855] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:23.586 [2024-08-14 06:49:50.639879] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:23.586 [2024-08-14 06:49:50.639892] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:23.586 [2024-08-14 06:49:50.639923] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:23.586 spare 00:18:23.586 [2024-08-14 06:49:50.647247] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:18:23.586 [2024-08-14 06:49:50.649477] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:23.586 06:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:18:24.523 06:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.523 06:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:24.523 06:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:24.523 06:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:24.523 06:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:24.523 06:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.523 06:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.782 06:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:24.782 "name": "raid_bdev1", 00:18:24.782 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:24.782 "strip_size_kb": 0, 00:18:24.782 "state": "online", 00:18:24.782 "raid_level": "raid1", 00:18:24.782 "superblock": true, 00:18:24.782 "num_base_bdevs": 2, 00:18:24.782 "num_base_bdevs_discovered": 2, 00:18:24.782 "num_base_bdevs_operational": 2, 00:18:24.782 "process": { 00:18:24.782 "type": "rebuild", 00:18:24.782 "target": "spare", 00:18:24.782 "progress": { 00:18:24.782 "blocks": 24576, 00:18:24.782 "percent": 38 00:18:24.782 } 00:18:24.782 }, 00:18:24.782 "base_bdevs_list": [ 00:18:24.782 { 00:18:24.782 "name": "spare", 00:18:24.783 "uuid": "0f1f2e89-fb00-5b4e-bb81-588ee897312e", 00:18:24.783 "is_configured": true, 00:18:24.783 "data_offset": 2048, 00:18:24.783 "data_size": 63488 00:18:24.783 }, 00:18:24.783 { 00:18:24.783 "name": "BaseBdev2", 00:18:24.783 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:24.783 "is_configured": true, 00:18:24.783 "data_offset": 2048, 00:18:24.783 "data_size": 63488 00:18:24.783 } 00:18:24.783 ] 00:18:24.783 }' 00:18:24.783 06:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:24.783 06:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.783 06:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:24.783 
06:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.783 06:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:25.042 [2024-08-14 06:49:52.174133] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:25.042 [2024-08-14 06:49:52.261986] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:25.042 [2024-08-14 06:49:52.262096] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.042 [2024-08-14 06:49:52.262120] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:25.042 [2024-08-14 06:49:52.262130] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:25.300 06:49:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:25.300 06:49:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:25.300 06:49:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:25.300 06:49:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:25.300 06:49:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:25.300 06:49:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:25.300 06:49:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.300 06:49:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.300 06:49:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.300 06:49:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.300 06:49:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.300 06:49:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.300 06:49:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.300 "name": "raid_bdev1", 00:18:25.300 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:25.300 "strip_size_kb": 0, 00:18:25.300 "state": "online", 00:18:25.300 "raid_level": "raid1", 00:18:25.300 "superblock": true, 00:18:25.300 "num_base_bdevs": 2, 00:18:25.300 "num_base_bdevs_discovered": 1, 00:18:25.300 "num_base_bdevs_operational": 1, 00:18:25.300 "base_bdevs_list": [ 00:18:25.300 { 00:18:25.300 "name": null, 00:18:25.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.300 "is_configured": false, 00:18:25.300 "data_offset": 2048, 00:18:25.300 "data_size": 63488 00:18:25.300 }, 00:18:25.300 { 00:18:25.300 "name": "BaseBdev2", 00:18:25.300 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:25.300 "is_configured": true, 00:18:25.300 "data_offset": 2048, 00:18:25.300 "data_size": 63488 00:18:25.300 } 00:18:25.300 ] 00:18:25.300 }' 00:18:25.300 06:49:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.300 06:49:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.868 06:49:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:25.868 06:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:25.868 06:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:25.868 06:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:25.868 06:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:25.868 06:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.868 06:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.127 06:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:26.127 "name": "raid_bdev1", 00:18:26.127 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:26.127 "strip_size_kb": 0, 00:18:26.127 "state": "online", 00:18:26.127 "raid_level": "raid1", 00:18:26.127 "superblock": true, 00:18:26.127 "num_base_bdevs": 2, 00:18:26.127 "num_base_bdevs_discovered": 1, 00:18:26.127 "num_base_bdevs_operational": 1, 00:18:26.127 "base_bdevs_list": [ 00:18:26.127 { 00:18:26.127 "name": null, 00:18:26.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.127 "is_configured": false, 00:18:26.127 "data_offset": 2048, 00:18:26.127 "data_size": 63488 00:18:26.127 }, 00:18:26.127 { 00:18:26.127 "name": "BaseBdev2", 00:18:26.127 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:26.127 "is_configured": true, 00:18:26.127 "data_offset": 2048, 00:18:26.127 "data_size": 63488 00:18:26.127 } 00:18:26.127 ] 00:18:26.127 }' 00:18:26.127 06:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:26.127 06:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:26.127 06:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:26.127 06:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:26.127 06:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:18:26.386 06:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:26.646 [2024-08-14 06:49:53.723994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:26.646 [2024-08-14 06:49:53.724237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.646 [2024-08-14 06:49:53.724278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:26.647 [2024-08-14 06:49:53.724289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.647 [2024-08-14 06:49:53.724796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.647 [2024-08-14 06:49:53.724815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:26.647 [2024-08-14 06:49:53.724924] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:26.647 [2024-08-14 06:49:53.724954] 
bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:26.647 [2024-08-14 06:49:53.724968] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:26.647 BaseBdev1 00:18:26.647 06:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:18:27.588 06:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.588 06:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:27.588 06:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:27.588 06:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:27.588 06:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:27.588 06:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:27.588 06:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:27.588 06:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:27.588 06:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:27.588 06:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:27.588 06:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.588 06:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.847 06:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:27.847 "name": "raid_bdev1", 00:18:27.847 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:27.847 "strip_size_kb": 0, 00:18:27.847 "state": "online", 00:18:27.847 "raid_level": "raid1", 00:18:27.847 "superblock": true, 00:18:27.847 "num_base_bdevs": 2, 00:18:27.847 "num_base_bdevs_discovered": 1, 00:18:27.847 "num_base_bdevs_operational": 1, 00:18:27.847 "base_bdevs_list": [ 00:18:27.847 { 00:18:27.847 "name": null, 00:18:27.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.847 "is_configured": false, 00:18:27.847 "data_offset": 2048, 00:18:27.847 "data_size": 63488 00:18:27.847 }, 00:18:27.847 { 00:18:27.847 "name": "BaseBdev2", 00:18:27.847 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:27.847 "is_configured": true, 00:18:27.847 "data_offset": 2048, 00:18:27.847 "data_size": 63488 00:18:27.847 } 00:18:27.847 ] 00:18:27.847 }' 00:18:27.847 06:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:27.847 06:49:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.417 06:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:28.417 06:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:28.417 06:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:28.417 06:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:28.417 06:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:28.417 06:49:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.417 06:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:28.677 "name": "raid_bdev1", 00:18:28.677 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:28.677 "strip_size_kb": 0, 00:18:28.677 "state": "online", 00:18:28.677 "raid_level": "raid1", 00:18:28.677 "superblock": true, 00:18:28.677 "num_base_bdevs": 2, 00:18:28.677 "num_base_bdevs_discovered": 1, 00:18:28.677 "num_base_bdevs_operational": 1, 00:18:28.677 "base_bdevs_list": [ 00:18:28.677 { 00:18:28.677 "name": null, 00:18:28.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.677 "is_configured": false, 00:18:28.677 "data_offset": 2048, 00:18:28.677 "data_size": 63488 00:18:28.677 }, 00:18:28.677 { 00:18:28.677 "name": "BaseBdev2", 00:18:28.677 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:28.677 "is_configured": true, 00:18:28.677 "data_offset": 2048, 00:18:28.677 "data_size": 63488 00:18:28.677 } 00:18:28.677 ] 00:18:28.677 }' 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@646 -- # local es=0 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:28.677 06:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:28.937 [2024-08-14 06:49:55.972326] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.937 [2024-08-14 06:49:55.972591] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:28.937 [2024-08-14 06:49:55.972606] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:28.937 request: 00:18:28.937 { 00:18:28.937 "base_bdev": "BaseBdev1", 00:18:28.937 "raid_bdev": "raid_bdev1", 00:18:28.937 "method": "bdev_raid_add_base_bdev", 00:18:28.937 "req_id": 1 00:18:28.937 } 00:18:28.937 Got JSON-RPC error response 00:18:28.937 response: 00:18:28.937 { 00:18:28.937 "code": -22, 00:18:28.937 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:28.937 } 00:18:28.937 06:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@649 -- # es=1 00:18:28.937 06:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:18:28.937 06:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:18:28.937 06:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:18:28.937 06:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:18:29.876 06:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.876 06:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:29.876 06:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:29.876 06:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:29.876 06:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:29.876 06:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:29.876 06:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:29.876 06:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:29.876 06:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:29.876 06:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:29.876 06:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.876 06:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.135 06:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:30.135 "name": "raid_bdev1", 00:18:30.135 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:30.135 "strip_size_kb": 0, 00:18:30.135 "state": "online", 00:18:30.135 "raid_level": "raid1", 00:18:30.135 "superblock": true, 00:18:30.135 "num_base_bdevs": 2, 00:18:30.135 "num_base_bdevs_discovered": 1, 00:18:30.135 "num_base_bdevs_operational": 1, 00:18:30.135 "base_bdevs_list": [ 00:18:30.135 { 00:18:30.135 "name": null, 00:18:30.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.135 "is_configured": false, 00:18:30.135 "data_offset": 2048, 00:18:30.135 "data_size": 63488 00:18:30.135 }, 00:18:30.135 { 00:18:30.135 "name": "BaseBdev2", 00:18:30.135 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 
00:18:30.135 "is_configured": true, 00:18:30.135 "data_offset": 2048, 00:18:30.135 "data_size": 63488 00:18:30.135 } 00:18:30.135 ] 00:18:30.135 }' 00:18:30.135 06:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:30.135 06:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.704 06:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:30.704 06:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:30.704 06:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:30.704 06:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:30.704 06:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:30.704 06:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.704 06:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.964 06:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:30.964 "name": "raid_bdev1", 00:18:30.964 "uuid": "427a4072-85fc-4f22-95c6-8e144f1fb3a6", 00:18:30.964 "strip_size_kb": 0, 00:18:30.964 "state": "online", 00:18:30.964 "raid_level": "raid1", 00:18:30.964 "superblock": true, 00:18:30.964 "num_base_bdevs": 2, 00:18:30.964 "num_base_bdevs_discovered": 1, 00:18:30.964 "num_base_bdevs_operational": 1, 00:18:30.964 "base_bdevs_list": [ 00:18:30.964 { 00:18:30.964 "name": null, 00:18:30.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.964 "is_configured": false, 00:18:30.964 "data_offset": 2048, 00:18:30.964 "data_size": 63488 00:18:30.964 }, 00:18:30.964 { 00:18:30.964 "name": "BaseBdev2", 00:18:30.964 "uuid": "a2e8c67c-8da1-55cd-b3bb-bd2b4b444ece", 00:18:30.964 "is_configured": true, 00:18:30.964 "data_offset": 2048, 00:18:30.964 "data_size": 63488 00:18:30.964 } 00:18:30.964 ] 00:18:30.964 }' 00:18:30.964 06:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:30.964 06:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:30.964 06:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:30.964 06:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:30.964 06:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 93002 00:18:30.964 06:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 93002 ']' 00:18:30.964 06:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 93002 00:18:30.964 06:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:18:30.964 06:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:30.964 06:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93002 00:18:30.964 06:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:30.964 06:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:30.964 06:49:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 93002' 00:18:30.964 killing process with pid 93002 00:18:30.964 Received shutdown signal, test time was about 60.000000 seconds 00:18:30.964 00:18:30.964 Latency(us) 00:18:30.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.964 =================================================================================================================== 00:18:30.964 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:30.964 06:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 93002 00:18:30.964 [2024-08-14 06:49:58.106638] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:30.964 06:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 93002 00:18:30.964 [2024-08-14 06:49:58.106806] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.964 [2024-08-14 06:49:58.106875] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.964 [2024-08-14 06:49:58.106886] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:18:30.964 [2024-08-14 06:49:58.165439] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:31.536 ************************************ 00:18:31.536 END TEST raid_rebuild_test_sb 00:18:31.536 ************************************ 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:18:31.536 00:18:31.536 real 0m33.129s 00:18:31.536 user 0m47.913s 00:18:31.536 sys 0m5.270s 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.536 06:49:58 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:18:31.536 06:49:58 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:18:31.536 06:49:58 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:31.536 06:49:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:31.536 ************************************ 00:18:31.536 START TEST raid_rebuild_test_io 00:18:31.536 ************************************ 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 false true true 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # 
(( i <= num_base_bdevs )) 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=93856 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 93856 /var/tmp/spdk-raid.sock 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@827 -- # '[' -z 93856 ']' 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:31.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:31.536 06:49:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.536 [2024-08-14 06:49:58.704112] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:18:31.536 [2024-08-14 06:49:58.704319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93856 ] 00:18:31.536 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:31.536 Zero copy mechanism will not be used. 
00:18:31.796 [2024-08-14 06:49:58.832534] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.796 [2024-08-14 06:49:58.910387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.796 [2024-08-14 06:49:58.988419] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.796 [2024-08-14 06:49:58.988463] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:32.365 06:49:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:32.365 06:49:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # return 0 00:18:32.365 06:49:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:18:32.365 06:49:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:32.625 BaseBdev1_malloc 00:18:32.625 06:49:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:32.884 [2024-08-14 06:49:59.949504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:32.884 [2024-08-14 06:49:59.949633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.884 [2024-08-14 06:49:59.949674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:18:32.885 [2024-08-14 06:49:59.949688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.885 [2024-08-14 06:49:59.952491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.885 BaseBdev1 00:18:32.885 [2024-08-14 06:49:59.952634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:32.885 06:49:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:18:32.885 06:49:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:33.144 BaseBdev2_malloc 00:18:33.144 06:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:33.144 [2024-08-14 06:50:00.368879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:33.144 [2024-08-14 06:50:00.369121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.144 [2024-08-14 06:50:00.369195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:33.144 [2024-08-14 06:50:00.369239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.144 [2024-08-14 06:50:00.372080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.144 [2024-08-14 06:50:00.372197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:33.144 BaseBdev2 00:18:33.144 06:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:33.443 spare_malloc 00:18:33.443 06:50:00 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:33.726 spare_delay 00:18:33.726 06:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:33.984 [2024-08-14 06:50:01.018510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:33.984 [2024-08-14 06:50:01.018742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.984 [2024-08-14 06:50:01.018802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:33.984 [2024-08-14 06:50:01.018866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.984 [2024-08-14 06:50:01.021862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.984 [2024-08-14 06:50:01.021955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:33.984 spare 00:18:33.984 06:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:33.984 [2024-08-14 06:50:01.222417] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:33.984 [2024-08-14 06:50:01.224848] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:33.984 [2024-08-14 06:50:01.225034] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:18:33.984 [2024-08-14 06:50:01.225077] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:33.984 [2024-08-14 06:50:01.225518] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:18:33.984 [2024-08-14 06:50:01.225790] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:18:33.984 [2024-08-14 06:50:01.225849] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:18:33.984 [2024-08-14 06:50:01.226097] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.242 06:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:34.242 06:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:34.242 06:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:34.242 06:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:34.242 06:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:34.242 06:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:34.242 06:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:34.242 06:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:34.242 06:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:34.242 06:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:34.242 06:50:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.242 06:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.242 06:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.242 "name": "raid_bdev1", 00:18:34.242 "uuid": "242e3578-7540-415d-aa9b-02927c1fca36", 00:18:34.242 "strip_size_kb": 0, 00:18:34.242 "state": "online", 00:18:34.242 "raid_level": "raid1", 00:18:34.242 "superblock": false, 00:18:34.242 "num_base_bdevs": 2, 00:18:34.242 "num_base_bdevs_discovered": 2, 00:18:34.242 "num_base_bdevs_operational": 2, 00:18:34.242 "base_bdevs_list": [ 00:18:34.242 { 00:18:34.242 "name": "BaseBdev1", 00:18:34.242 "uuid": "b8b9332f-db21-5875-adba-77961e3fd738", 00:18:34.242 "is_configured": true, 00:18:34.242 "data_offset": 0, 00:18:34.242 "data_size": 65536 00:18:34.242 }, 00:18:34.242 { 00:18:34.242 "name": "BaseBdev2", 00:18:34.242 "uuid": "6c77880b-3840-530f-8905-184eda46487f", 00:18:34.242 "is_configured": true, 00:18:34.242 "data_offset": 0, 00:18:34.242 "data_size": 65536 00:18:34.242 } 00:18:34.242 ] 00:18:34.242 }' 00:18:34.242 06:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.242 06:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.810 06:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:34.810 06:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:18:35.069 [2024-08-14 06:50:02.185118] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.070 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:18:35.070 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.070 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:35.330 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:18:35.330 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:18:35.330 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:35.330 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:35.330 [2024-08-14 06:50:02.515985] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:18:35.330 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:35.330 Zero copy mechanism will not be used. 00:18:35.330 Running I/O for 60 seconds... 
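The setup traced above reduces to a short RPC sequence: each base device is a 32 MiB malloc bdev with 512-byte blocks wrapped in a passthru bdev, the spare additionally sits behind a delay bdev, and the two bases are assembled into a RAID1 named raid_bdev1. A condensed sketch of that sequence, assuming the bdevperf target is already listening on the /var/tmp/spdk-raid.sock socket used throughout this run (the rpc shell variable is shorthand introduced here, not part of the test script):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Two base devices: 32 MiB malloc bdevs (512-byte blocks), each wrapped in a
    # passthru bdev so the test can remove and re-add them by name.
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    $rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
    $rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2

    # The spare inserts a delay bdev between its malloc backing and the passthru
    # wrapper (flags copied from the trace: with these values reads pass straight
    # through while writes are delayed).
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare

    # Assemble the two bases into the RAID1 volume under test.
    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1

The blockcnt 65536 / blocklen 512 reported by raid_bdev_configure_cont above is consistent with those 32 MiB base devices.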
00:18:35.589 [2024-08-14 06:50:02.615038] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:35.589 [2024-08-14 06:50:02.628635] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:18:35.589 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.589 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:35.589 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:35.589 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:35.589 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:35.589 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:35.589 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:35.589 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:35.589 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:35.589 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:35.589 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.589 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.849 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:35.849 "name": "raid_bdev1", 00:18:35.849 "uuid": "242e3578-7540-415d-aa9b-02927c1fca36", 00:18:35.849 "strip_size_kb": 0, 00:18:35.849 "state": "online", 00:18:35.849 "raid_level": "raid1", 00:18:35.849 "superblock": false, 00:18:35.849 "num_base_bdevs": 2, 00:18:35.849 "num_base_bdevs_discovered": 1, 00:18:35.849 "num_base_bdevs_operational": 1, 00:18:35.849 "base_bdevs_list": [ 00:18:35.849 { 00:18:35.849 "name": null, 00:18:35.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.849 "is_configured": false, 00:18:35.849 "data_offset": 0, 00:18:35.849 "data_size": 65536 00:18:35.849 }, 00:18:35.849 { 00:18:35.849 "name": "BaseBdev2", 00:18:35.849 "uuid": "6c77880b-3840-530f-8905-184eda46487f", 00:18:35.849 "is_configured": true, 00:18:35.849 "data_offset": 0, 00:18:35.849 "data_size": 65536 00:18:35.849 } 00:18:35.849 ] 00:18:35.849 }' 00:18:35.849 06:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:35.849 06:50:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.418 06:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:36.418 [2024-08-14 06:50:03.591156] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:36.418 06:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:36.418 [2024-08-14 06:50:03.655347] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:18:36.418 [2024-08-14 06:50:03.658144] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:36.675 [2024-08-14 06:50:03.784328] bdev_raid.c: 
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:36.675 [2024-08-14 06:50:03.785224] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:36.675 [2024-08-14 06:50:03.910305] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:36.675 [2024-08-14 06:50:03.910853] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:37.243 [2024-08-14 06:50:04.249083] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:37.243 [2024-08-14 06:50:04.249913] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:37.243 [2024-08-14 06:50:04.481642] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:37.243 [2024-08-14 06:50:04.482198] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:37.502 06:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.502 06:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:37.502 06:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:37.502 06:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:37.502 06:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:37.502 06:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.502 06:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.760 [2024-08-14 06:50:04.806440] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:37.760 [2024-08-14 06:50:04.807522] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:37.760 06:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:37.760 "name": "raid_bdev1", 00:18:37.760 "uuid": "242e3578-7540-415d-aa9b-02927c1fca36", 00:18:37.760 "strip_size_kb": 0, 00:18:37.760 "state": "online", 00:18:37.760 "raid_level": "raid1", 00:18:37.760 "superblock": false, 00:18:37.760 "num_base_bdevs": 2, 00:18:37.760 "num_base_bdevs_discovered": 2, 00:18:37.760 "num_base_bdevs_operational": 2, 00:18:37.760 "process": { 00:18:37.760 "type": "rebuild", 00:18:37.760 "target": "spare", 00:18:37.760 "progress": { 00:18:37.760 "blocks": 14336, 00:18:37.760 "percent": 21 00:18:37.760 } 00:18:37.760 }, 00:18:37.760 "base_bdevs_list": [ 00:18:37.760 { 00:18:37.760 "name": "spare", 00:18:37.761 "uuid": "923ea7ef-0e1d-58d7-bf48-66ff2b2d5b5a", 00:18:37.761 "is_configured": true, 00:18:37.761 "data_offset": 0, 00:18:37.761 "data_size": 65536 00:18:37.761 }, 00:18:37.761 { 00:18:37.761 "name": "BaseBdev2", 00:18:37.761 "uuid": "6c77880b-3840-530f-8905-184eda46487f", 00:18:37.761 "is_configured": true, 00:18:37.761 "data_offset": 0, 00:18:37.761 "data_size": 65536 
00:18:37.761 } 00:18:37.761 ] 00:18:37.761 }' 00:18:37.761 06:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:37.761 06:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.761 06:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:37.761 06:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.761 06:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:38.020 [2024-08-14 06:50:05.033478] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:38.020 [2024-08-14 06:50:05.034123] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:38.020 [2024-08-14 06:50:05.129496] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:38.020 [2024-08-14 06:50:05.144830] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:38.020 [2024-08-14 06:50:05.158681] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:38.020 [2024-08-14 06:50:05.167753] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.020 [2024-08-14 06:50:05.167848] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:38.020 [2024-08-14 06:50:05.167896] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:38.020 [2024-08-14 06:50:05.178559] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:18:38.020 06:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:38.020 06:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:38.020 06:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:38.020 06:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:38.020 06:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:38.020 06:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:38.020 06:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:38.020 06:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:38.020 06:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:38.020 06:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:38.020 06:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.020 06:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.280 06:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:38.280 "name": "raid_bdev1", 00:18:38.280 "uuid": "242e3578-7540-415d-aa9b-02927c1fca36", 
00:18:38.280 "strip_size_kb": 0, 00:18:38.280 "state": "online", 00:18:38.280 "raid_level": "raid1", 00:18:38.280 "superblock": false, 00:18:38.280 "num_base_bdevs": 2, 00:18:38.280 "num_base_bdevs_discovered": 1, 00:18:38.280 "num_base_bdevs_operational": 1, 00:18:38.280 "base_bdevs_list": [ 00:18:38.280 { 00:18:38.280 "name": null, 00:18:38.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.280 "is_configured": false, 00:18:38.280 "data_offset": 0, 00:18:38.280 "data_size": 65536 00:18:38.280 }, 00:18:38.280 { 00:18:38.280 "name": "BaseBdev2", 00:18:38.280 "uuid": "6c77880b-3840-530f-8905-184eda46487f", 00:18:38.280 "is_configured": true, 00:18:38.280 "data_offset": 0, 00:18:38.280 "data_size": 65536 00:18:38.280 } 00:18:38.280 ] 00:18:38.280 }' 00:18:38.280 06:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:38.280 06:50:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:38.848 06:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:38.848 06:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:38.848 06:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:38.848 06:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:38.848 06:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:38.848 06:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.848 06:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.108 06:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:39.108 "name": "raid_bdev1", 00:18:39.108 "uuid": "242e3578-7540-415d-aa9b-02927c1fca36", 00:18:39.108 "strip_size_kb": 0, 00:18:39.108 "state": "online", 00:18:39.108 "raid_level": "raid1", 00:18:39.108 "superblock": false, 00:18:39.108 "num_base_bdevs": 2, 00:18:39.108 "num_base_bdevs_discovered": 1, 00:18:39.108 "num_base_bdevs_operational": 1, 00:18:39.108 "base_bdevs_list": [ 00:18:39.108 { 00:18:39.108 "name": null, 00:18:39.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.108 "is_configured": false, 00:18:39.108 "data_offset": 0, 00:18:39.108 "data_size": 65536 00:18:39.108 }, 00:18:39.108 { 00:18:39.108 "name": "BaseBdev2", 00:18:39.108 "uuid": "6c77880b-3840-530f-8905-184eda46487f", 00:18:39.108 "is_configured": true, 00:18:39.108 "data_offset": 0, 00:18:39.108 "data_size": 65536 00:18:39.108 } 00:18:39.108 ] 00:18:39.108 }' 00:18:39.108 06:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:39.108 06:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:39.108 06:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:39.108 06:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:39.108 06:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:39.367 [2024-08-14 06:50:06.549407] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
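verify_raid_bdev_state and verify_raid_bdev_process in the trace both come down to fetching the array's JSON once and asserting a handful of fields from it. A minimal sketch of those checks, reusing the jq filters that appear above; the expected values are the ones asserted for the degraded array, and rpc is the same shorthand as before, not a variable from the test itself:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Pull the descriptor for raid_bdev1 out of the full raid bdev list.
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

    # Degraded but online: one base bdev discovered and operational after
    # BaseBdev1 was removed, raid level still raid1.
    [[ $(jq -r '.state'      <<< "$info") == online ]]
    [[ $(jq -r '.raid_level' <<< "$info") == raid1 ]]
    [[ $(jq -r '.num_base_bdevs_discovered'  <<< "$info") -eq 1 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") -eq 1 ]]

    # No background process is running at this point; once the spare has been
    # re-added, the same two filters report "rebuild" and "spare", which is what
    # the @679 check further down looks for.
    [[ $(jq -r '.process.type   // "none"' <<< "$info") == none ]]
    [[ $(jq -r '.process.target // "none"' <<< "$info") == none ]]
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare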
00:18:39.367 06:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:18:39.367 [2024-08-14 06:50:06.604784] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:18:39.367 [2024-08-14 06:50:06.607433] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:39.626 [2024-08-14 06:50:06.723691] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:39.626 [2024-08-14 06:50:06.724693] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:39.885 [2024-08-14 06:50:06.956964] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:39.885 [2024-08-14 06:50:06.957615] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:40.143 [2024-08-14 06:50:07.306045] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:40.143 [2024-08-14 06:50:07.307087] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:40.411 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.411 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:40.411 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:40.411 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:40.411 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:40.411 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.411 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.680 [2024-08-14 06:50:07.791309] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:40.680 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:40.680 "name": "raid_bdev1", 00:18:40.680 "uuid": "242e3578-7540-415d-aa9b-02927c1fca36", 00:18:40.680 "strip_size_kb": 0, 00:18:40.680 "state": "online", 00:18:40.680 "raid_level": "raid1", 00:18:40.680 "superblock": false, 00:18:40.680 "num_base_bdevs": 2, 00:18:40.680 "num_base_bdevs_discovered": 2, 00:18:40.680 "num_base_bdevs_operational": 2, 00:18:40.680 "process": { 00:18:40.680 "type": "rebuild", 00:18:40.680 "target": "spare", 00:18:40.680 "progress": { 00:18:40.680 "blocks": 16384, 00:18:40.680 "percent": 25 00:18:40.680 } 00:18:40.680 }, 00:18:40.680 "base_bdevs_list": [ 00:18:40.680 { 00:18:40.680 "name": "spare", 00:18:40.680 "uuid": "923ea7ef-0e1d-58d7-bf48-66ff2b2d5b5a", 00:18:40.680 "is_configured": true, 00:18:40.680 "data_offset": 0, 00:18:40.680 "data_size": 65536 00:18:40.680 }, 00:18:40.680 { 00:18:40.680 "name": "BaseBdev2", 00:18:40.680 "uuid": "6c77880b-3840-530f-8905-184eda46487f", 00:18:40.680 "is_configured": true, 00:18:40.680 "data_offset": 0, 00:18:40.680 "data_size": 65536 00:18:40.680 } 00:18:40.680 ] 00:18:40.680 }' 00:18:40.680 06:50:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:40.680 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.680 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:40.680 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.940 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:18:40.940 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:18:40.940 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:18:40.940 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:18:40.940 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=766 00:18:40.940 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:18:40.940 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.940 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:40.940 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:40.940 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:40.940 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:40.940 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.940 06:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.940 [2024-08-14 06:50:08.058026] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:40.940 06:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:40.940 "name": "raid_bdev1", 00:18:40.940 "uuid": "242e3578-7540-415d-aa9b-02927c1fca36", 00:18:40.940 "strip_size_kb": 0, 00:18:40.940 "state": "online", 00:18:40.940 "raid_level": "raid1", 00:18:40.940 "superblock": false, 00:18:40.940 "num_base_bdevs": 2, 00:18:40.940 "num_base_bdevs_discovered": 2, 00:18:40.940 "num_base_bdevs_operational": 2, 00:18:40.940 "process": { 00:18:40.940 "type": "rebuild", 00:18:40.940 "target": "spare", 00:18:40.940 "progress": { 00:18:40.940 "blocks": 20480, 00:18:40.940 "percent": 31 00:18:40.940 } 00:18:40.940 }, 00:18:40.940 "base_bdevs_list": [ 00:18:40.940 { 00:18:40.940 "name": "spare", 00:18:40.940 "uuid": "923ea7ef-0e1d-58d7-bf48-66ff2b2d5b5a", 00:18:40.940 "is_configured": true, 00:18:40.940 "data_offset": 0, 00:18:40.940 "data_size": 65536 00:18:40.940 }, 00:18:40.940 { 00:18:40.940 "name": "BaseBdev2", 00:18:40.940 "uuid": "6c77880b-3840-530f-8905-184eda46487f", 00:18:40.940 "is_configured": true, 00:18:40.940 "data_offset": 0, 00:18:40.940 "data_size": 65536 00:18:40.940 } 00:18:40.940 ] 00:18:40.940 }' 00:18:40.940 06:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:40.940 06:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.940 06:50:08 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:40.940 [2024-08-14 06:50:08.175333] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:41.199 06:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.199 06:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:18:41.768 [2024-08-14 06:50:08.964089] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:42.026 06:50:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:18:42.026 06:50:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.026 06:50:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:42.026 06:50:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:42.026 06:50:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:42.026 06:50:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:42.026 06:50:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.026 06:50:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.286 [2024-08-14 06:50:09.297300] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:18:42.286 06:50:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:42.286 "name": "raid_bdev1", 00:18:42.286 "uuid": "242e3578-7540-415d-aa9b-02927c1fca36", 00:18:42.286 "strip_size_kb": 0, 00:18:42.286 "state": "online", 00:18:42.286 "raid_level": "raid1", 00:18:42.286 "superblock": false, 00:18:42.286 "num_base_bdevs": 2, 00:18:42.286 "num_base_bdevs_discovered": 2, 00:18:42.286 "num_base_bdevs_operational": 2, 00:18:42.286 "process": { 00:18:42.286 "type": "rebuild", 00:18:42.286 "target": "spare", 00:18:42.286 "progress": { 00:18:42.286 "blocks": 38912, 00:18:42.286 "percent": 59 00:18:42.286 } 00:18:42.286 }, 00:18:42.286 "base_bdevs_list": [ 00:18:42.286 { 00:18:42.286 "name": "spare", 00:18:42.286 "uuid": "923ea7ef-0e1d-58d7-bf48-66ff2b2d5b5a", 00:18:42.286 "is_configured": true, 00:18:42.286 "data_offset": 0, 00:18:42.286 "data_size": 65536 00:18:42.286 }, 00:18:42.286 { 00:18:42.286 "name": "BaseBdev2", 00:18:42.286 "uuid": "6c77880b-3840-530f-8905-184eda46487f", 00:18:42.286 "is_configured": true, 00:18:42.286 "data_offset": 0, 00:18:42.286 "data_size": 65536 00:18:42.286 } 00:18:42.286 ] 00:18:42.286 }' 00:18:42.286 06:50:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:42.286 06:50:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.286 06:50:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:42.286 06:50:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.286 06:50:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:18:42.286 [2024-08-14 06:50:09.515857] bdev_raid.c: 
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:18:42.855 [2024-08-14 06:50:09.846668] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:18:42.855 [2024-08-14 06:50:10.055847] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:18:42.855 [2024-08-14 06:50:10.056406] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:18:43.425 06:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:18:43.425 06:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.425 06:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:43.425 06:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:43.425 06:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:43.425 06:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:43.425 06:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.425 06:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.685 [2024-08-14 06:50:10.690606] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:18:43.685 06:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:43.685 "name": "raid_bdev1", 00:18:43.685 "uuid": "242e3578-7540-415d-aa9b-02927c1fca36", 00:18:43.685 "strip_size_kb": 0, 00:18:43.685 "state": "online", 00:18:43.685 "raid_level": "raid1", 00:18:43.685 "superblock": false, 00:18:43.685 "num_base_bdevs": 2, 00:18:43.685 "num_base_bdevs_discovered": 2, 00:18:43.685 "num_base_bdevs_operational": 2, 00:18:43.685 "process": { 00:18:43.685 "type": "rebuild", 00:18:43.685 "target": "spare", 00:18:43.685 "progress": { 00:18:43.685 "blocks": 57344, 00:18:43.685 "percent": 87 00:18:43.685 } 00:18:43.685 }, 00:18:43.685 "base_bdevs_list": [ 00:18:43.685 { 00:18:43.685 "name": "spare", 00:18:43.685 "uuid": "923ea7ef-0e1d-58d7-bf48-66ff2b2d5b5a", 00:18:43.685 "is_configured": true, 00:18:43.685 "data_offset": 0, 00:18:43.685 "data_size": 65536 00:18:43.685 }, 00:18:43.685 { 00:18:43.685 "name": "BaseBdev2", 00:18:43.685 "uuid": "6c77880b-3840-530f-8905-184eda46487f", 00:18:43.685 "is_configured": true, 00:18:43.685 "data_offset": 0, 00:18:43.685 "data_size": 65536 00:18:43.685 } 00:18:43.685 ] 00:18:43.685 }' 00:18:43.685 06:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:43.685 06:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.685 06:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:43.685 06:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.686 06:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:18:44.254 [2024-08-14 06:50:11.234047] bdev_raid.c:2886:raid_bdev_process_thread_run: 
*DEBUG*: process completed on raid_bdev1 00:18:44.254 [2024-08-14 06:50:11.333831] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:44.254 [2024-08-14 06:50:11.337425] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.831 06:50:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:18:44.831 06:50:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.831 06:50:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:44.831 06:50:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:44.831 06:50:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:44.831 06:50:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:44.831 06:50:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.831 06:50:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.831 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:44.831 "name": "raid_bdev1", 00:18:44.831 "uuid": "242e3578-7540-415d-aa9b-02927c1fca36", 00:18:44.831 "strip_size_kb": 0, 00:18:44.831 "state": "online", 00:18:44.831 "raid_level": "raid1", 00:18:44.831 "superblock": false, 00:18:44.831 "num_base_bdevs": 2, 00:18:44.831 "num_base_bdevs_discovered": 2, 00:18:44.831 "num_base_bdevs_operational": 2, 00:18:44.831 "base_bdevs_list": [ 00:18:44.831 { 00:18:44.831 "name": "spare", 00:18:44.831 "uuid": "923ea7ef-0e1d-58d7-bf48-66ff2b2d5b5a", 00:18:44.831 "is_configured": true, 00:18:44.831 "data_offset": 0, 00:18:44.831 "data_size": 65536 00:18:44.831 }, 00:18:44.831 { 00:18:44.831 "name": "BaseBdev2", 00:18:44.831 "uuid": "6c77880b-3840-530f-8905-184eda46487f", 00:18:44.831 "is_configured": true, 00:18:44.831 "data_offset": 0, 00:18:44.831 "data_size": 65536 00:18:44.831 } 00:18:44.831 ] 00:18:44.831 }' 00:18:44.831 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:45.112 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:45.112 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:45.112 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:18:45.112 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:18:45.112 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:45.112 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:45.112 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:45.112 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:45.112 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:45.112 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.112 06:50:12 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:45.370 "name": "raid_bdev1", 00:18:45.370 "uuid": "242e3578-7540-415d-aa9b-02927c1fca36", 00:18:45.370 "strip_size_kb": 0, 00:18:45.370 "state": "online", 00:18:45.370 "raid_level": "raid1", 00:18:45.370 "superblock": false, 00:18:45.370 "num_base_bdevs": 2, 00:18:45.370 "num_base_bdevs_discovered": 2, 00:18:45.370 "num_base_bdevs_operational": 2, 00:18:45.370 "base_bdevs_list": [ 00:18:45.370 { 00:18:45.370 "name": "spare", 00:18:45.370 "uuid": "923ea7ef-0e1d-58d7-bf48-66ff2b2d5b5a", 00:18:45.370 "is_configured": true, 00:18:45.370 "data_offset": 0, 00:18:45.370 "data_size": 65536 00:18:45.370 }, 00:18:45.370 { 00:18:45.370 "name": "BaseBdev2", 00:18:45.370 "uuid": "6c77880b-3840-530f-8905-184eda46487f", 00:18:45.370 "is_configured": true, 00:18:45.370 "data_offset": 0, 00:18:45.370 "data_size": 65536 00:18:45.370 } 00:18:45.370 ] 00:18:45.370 }' 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.370 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.628 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:45.628 "name": "raid_bdev1", 00:18:45.628 "uuid": "242e3578-7540-415d-aa9b-02927c1fca36", 00:18:45.628 "strip_size_kb": 0, 00:18:45.628 "state": "online", 00:18:45.628 "raid_level": "raid1", 00:18:45.628 "superblock": false, 00:18:45.628 "num_base_bdevs": 2, 00:18:45.628 "num_base_bdevs_discovered": 2, 00:18:45.628 "num_base_bdevs_operational": 2, 00:18:45.628 "base_bdevs_list": [ 00:18:45.628 { 00:18:45.628 "name": "spare", 00:18:45.628 "uuid": "923ea7ef-0e1d-58d7-bf48-66ff2b2d5b5a", 00:18:45.628 "is_configured": 
true, 00:18:45.628 "data_offset": 0, 00:18:45.628 "data_size": 65536 00:18:45.628 }, 00:18:45.628 { 00:18:45.628 "name": "BaseBdev2", 00:18:45.628 "uuid": "6c77880b-3840-530f-8905-184eda46487f", 00:18:45.628 "is_configured": true, 00:18:45.628 "data_offset": 0, 00:18:45.628 "data_size": 65536 00:18:45.628 } 00:18:45.628 ] 00:18:45.628 }' 00:18:45.628 06:50:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:45.628 06:50:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:46.195 06:50:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:46.454 [2024-08-14 06:50:13.558418] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:46.454 [2024-08-14 06:50:13.558477] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.454 00:18:46.454 Latency(us) 00:18:46.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.454 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:46.454 raid_bdev1 : 11.13 102.62 307.85 0.00 0.00 13815.68 313.01 116762.83 00:18:46.454 =================================================================================================================== 00:18:46.454 Total : 102.62 307.85 0.00 0.00 13815.68 313.01 116762.83 00:18:46.454 0 00:18:46.454 [2024-08-14 06:50:13.631634] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.454 [2024-08-14 06:50:13.631701] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.454 [2024-08-14 06:50:13.631818] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.454 [2024-08-14 06:50:13.631835] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:18:46.454 06:50:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:18:46.454 06:50:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.712 06:50:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:18:46.712 06:50:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:18:46.712 06:50:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:18:46.712 06:50:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:18:46.712 06:50:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:46.712 06:50:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:46.712 06:50:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:46.712 06:50:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:46.712 06:50:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:46.712 06:50:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:46.712 06:50:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:46.712 06:50:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
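The teardown check traced just above is equally small: delete the array and confirm that the raid bdev list is empty afterwards, while the base bdevs themselves stay registered for the data comparison that follows. A sketch, with rpc again as shorthand:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Remove the array; the passthru base bdevs (including the rebuilt spare)
    # remain registered and are compared over NBD next.
    $rpc bdev_raid_delete raid_bdev1

    # bdev_raid_get_bdevs should now return an empty list, hence the
    # "jq length" / [[ 0 == 0 ]] check in the trace.
    [[ $($rpc bdev_raid_get_bdevs all | jq length) == 0 ]]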
00:18:46.712 06:50:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:18:46.971 /dev/nbd0 00:18:46.971 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:46.971 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:46.971 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:18:46.971 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:18:46.971 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:46.971 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:46.971 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:18:46.971 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.972 1+0 records in 00:18:46.972 1+0 records out 00:18:46.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343009 s, 11.9 MB/s 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
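The data-integrity check that the rest of this trace performs exports the rebuilt spare and the surviving base bdev as kernel block devices over NBD and compares them byte for byte. Condensed into a sketch (device nodes, bdev names, and the cmp offset are taken from the trace; the waitfornbd/dd probing is left out):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Export both bdevs through the kernel NBD driver.
    $rpc nbd_start_disk spare     /dev/nbd0
    $rpc nbd_start_disk BaseBdev2 /dev/nbd1

    # After a successful rebuild the spare must be byte-identical to the
    # surviving mirror leg; cmp exits non-zero on the first mismatch, which
    # fails the test.
    cmp -i 0 /dev/nbd0 /dev/nbd1

    # Tear the NBD exports back down.
    $rpc nbd_stop_disk /dev/nbd1
    $rpc nbd_stop_disk /dev/nbd0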
00:18:46.972 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:18:47.230 /dev/nbd1 00:18:47.230 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:47.230 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:47.230 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:18:47.230 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:18:47.230 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:47.230 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:47.230 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:18:47.230 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:18:47.230 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:47.230 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:47.230 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:47.230 1+0 records in 00:18:47.230 1+0 records out 00:18:47.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524061 s, 7.8 MB/s 00:18:47.230 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.230 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:18:47.231 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.231 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:47.231 06:50:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:18:47.231 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:47.231 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:47.231 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:47.489 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:18:47.489 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:47.489 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:47.489 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:47.489 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:47.489 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:47.489 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:47.747 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:47.747 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:47.747 06:50:14 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:47.747 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:47.747 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:47.747 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:47.747 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:47.747 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:47.747 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:47.747 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:47.747 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:47.747 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:47.747 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:47.747 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:47.747 06:50:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:48.006 06:50:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:48.006 06:50:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:48.006 06:50:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:48.006 06:50:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:48.006 06:50:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:48.006 06:50:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:48.006 06:50:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:48.006 06:50:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:48.006 06:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:18:48.006 06:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 93856 00:18:48.006 06:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@946 -- # '[' -z 93856 ']' 00:18:48.006 06:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # kill -0 93856 00:18:48.006 06:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # uname 00:18:48.006 06:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:48.007 06:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93856 00:18:48.007 killing process with pid 93856 00:18:48.007 Received shutdown signal, test time was about 12.575462 seconds 00:18:48.007 00:18:48.007 Latency(us) 00:18:48.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.007 =================================================================================================================== 00:18:48.007 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:48.007 06:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:48.007 06:50:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:48.007 06:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93856' 00:18:48.007 06:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@965 -- # kill 93856 00:18:48.007 [2024-08-14 06:50:15.070142] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:48.007 06:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # wait 93856 00:18:48.007 [2024-08-14 06:50:15.097199] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:18:48.266 00:18:48.266 real 0m16.723s 00:18:48.266 user 0m25.391s 00:18:48.266 sys 0m2.278s 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:48.266 ************************************ 00:18:48.266 END TEST raid_rebuild_test_io 00:18:48.266 ************************************ 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.266 06:50:15 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:18:48.266 06:50:15 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:18:48.266 06:50:15 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:48.266 06:50:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.266 ************************************ 00:18:48.266 START TEST raid_rebuild_test_sb_io 00:18:48.266 ************************************ 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true true true 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:18:48.266 06:50:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=94285 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 94285 /var/tmp/spdk-raid.sock 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@827 -- # '[' -z 94285 ']' 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:48.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:48.266 06:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.266 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:48.266 Zero copy mechanism will not be used. 00:18:48.266 [2024-08-14 06:50:15.504696] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
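Both this raid_rebuild_test_sb_io run and the raid_rebuild_test_io run above drive background I/O the same way: bdevperf is started against the RAID socket in wait-for-RPC mode, the bdevs and the array are configured over that socket, and perform_tests is then sent to start the workload. A condensed sketch using the flags from the command line above (spdk is shorthand for the repository path; error handling and the waitforlisten helper are omitted):

    spdk=/home/vagrant/spdk_repo/spdk

    # 50/50 random read/write, 3 MiB I/Os at queue depth 2, 60 seconds, aimed at
    # raid_bdev1; -z holds the app idle until a perform_tests RPC arrives.
    $spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # ... create the base bdevs and raid_bdev1 over the same socket ...

    # Kick off the actual I/O once the array is online.
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests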
00:18:48.266 [2024-08-14 06:50:15.504840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94285 ] 00:18:48.525 [2024-08-14 06:50:15.651302] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.525 [2024-08-14 06:50:15.705720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.525 [2024-08-14 06:50:15.751055] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:48.525 [2024-08-14 06:50:15.751098] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.462 06:50:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:49.462 06:50:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # return 0 00:18:49.462 06:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:18:49.462 06:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:49.462 BaseBdev1_malloc 00:18:49.462 06:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:49.721 [2024-08-14 06:50:16.888430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:49.721 [2024-08-14 06:50:16.888538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.721 [2024-08-14 06:50:16.888570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:18:49.721 [2024-08-14 06:50:16.888586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.721 [2024-08-14 06:50:16.891138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.721 [2024-08-14 06:50:16.891212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:49.721 BaseBdev1 00:18:49.721 06:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:18:49.721 06:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:49.980 BaseBdev2_malloc 00:18:49.980 06:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:50.239 [2024-08-14 06:50:17.412919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:50.239 [2024-08-14 06:50:17.413104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.239 [2024-08-14 06:50:17.413136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:50.239 [2024-08-14 06:50:17.413148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.239 [2024-08-14 06:50:17.415672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.239 [2024-08-14 06:50:17.415724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:18:50.239 BaseBdev2 00:18:50.239 06:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:50.498 spare_malloc 00:18:50.498 06:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:50.757 spare_delay 00:18:50.757 06:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:51.017 [2024-08-14 06:50:18.150307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:51.017 [2024-08-14 06:50:18.150466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.017 [2024-08-14 06:50:18.150547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:51.017 [2024-08-14 06:50:18.150587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.017 [2024-08-14 06:50:18.153059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.017 [2024-08-14 06:50:18.153146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:51.017 spare 00:18:51.017 06:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:51.277 [2024-08-14 06:50:18.385974] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:51.277 [2024-08-14 06:50:18.388234] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:51.277 [2024-08-14 06:50:18.388534] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:18:51.277 [2024-08-14 06:50:18.388599] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:51.277 [2024-08-14 06:50:18.388987] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:18:51.277 [2024-08-14 06:50:18.389246] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:18:51.277 [2024-08-14 06:50:18.389302] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:18:51.277 [2024-08-14 06:50:18.389613] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.277 06:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:51.277 06:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:51.277 06:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:51.277 06:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:51.277 06:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:51.277 06:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:51.277 06:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
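The RPC calls traced above assemble the test topology: each base bdev is a 32 MiB malloc disk (512-byte blocks) wrapped in a passthru vbdev, the future rebuild target 'spare' additionally sits behind a delay bdev configured with the latencies shown, so I/O to it is artificially delayed, and the raid1 array is created with -s so a superblock is written to every member. A condensed sketch of that sequence, using the commands and parameters shown in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Two base bdevs: malloc -> passthru (BaseBdev1, BaseBdev2).
    for i in 1 2; do
        $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        $rpc -s $sock bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev$i
    done
    # Spare for the rebuild: malloc -> delay -> passthru.
    $rpc -s $sock bdev_malloc_create 32 512 -b spare_malloc
    $rpc -s $sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc -s $sock bdev_passthru_create -b spare_delay -p spare
    # raid1 over the two base bdevs, with an on-disk superblock (-s).
    $rpc -s $sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1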
00:18:51.277 06:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:51.277 06:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:51.277 06:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:51.277 06:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.277 06:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.537 06:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:51.537 "name": "raid_bdev1", 00:18:51.537 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:18:51.537 "strip_size_kb": 0, 00:18:51.537 "state": "online", 00:18:51.537 "raid_level": "raid1", 00:18:51.537 "superblock": true, 00:18:51.537 "num_base_bdevs": 2, 00:18:51.537 "num_base_bdevs_discovered": 2, 00:18:51.537 "num_base_bdevs_operational": 2, 00:18:51.537 "base_bdevs_list": [ 00:18:51.537 { 00:18:51.537 "name": "BaseBdev1", 00:18:51.537 "uuid": "78cbd195-fb49-5735-9dba-12714d6dc3fe", 00:18:51.537 "is_configured": true, 00:18:51.537 "data_offset": 2048, 00:18:51.537 "data_size": 63488 00:18:51.537 }, 00:18:51.537 { 00:18:51.537 "name": "BaseBdev2", 00:18:51.537 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:18:51.537 "is_configured": true, 00:18:51.537 "data_offset": 2048, 00:18:51.537 "data_size": 63488 00:18:51.537 } 00:18:51.537 ] 00:18:51.537 }' 00:18:51.537 06:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:51.537 06:50:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:52.109 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:52.109 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:18:52.368 [2024-08-14 06:50:19.464380] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:52.368 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:18:52.368 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.368 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:52.626 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:18:52.626 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:18:52.626 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:52.626 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:52.626 [2024-08-14 06:50:19.821566] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:18:52.626 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:52.626 Zero copy mechanism will not be used. 00:18:52.626 Running I/O for 60 seconds... 
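The state checks earlier in this trace read the array back over RPC and filter with jq: bdev_raid_get_bdevs reports state, raid level, superblock flag and the base bdev list, while bdev_get_bdevs gives the exported size (num_blocks 63488 here) and base_bdevs_list[0].data_offset gives the 2048-block data offset at the start of each member, the region that holds the superblock. The queries as they appear in the trace, condensed:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Full raid bdev record (state, level, discovered/operational counts, members).
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
    # Usable capacity in blocks and the per-member data offset (superblock area).
    raid_bdev_size=$($rpc -s $sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks')
    data_offset=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset')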
00:18:52.885 [2024-08-14 06:50:19.920008] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:52.885 [2024-08-14 06:50:19.920379] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:18:52.885 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:52.885 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:52.885 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:52.885 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:52.885 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:52.885 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:52.885 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:52.885 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:52.885 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:52.885 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:52.885 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.885 06:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.143 06:50:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.143 "name": "raid_bdev1", 00:18:53.143 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:18:53.143 "strip_size_kb": 0, 00:18:53.143 "state": "online", 00:18:53.143 "raid_level": "raid1", 00:18:53.143 "superblock": true, 00:18:53.143 "num_base_bdevs": 2, 00:18:53.143 "num_base_bdevs_discovered": 1, 00:18:53.143 "num_base_bdevs_operational": 1, 00:18:53.143 "base_bdevs_list": [ 00:18:53.143 { 00:18:53.143 "name": null, 00:18:53.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.143 "is_configured": false, 00:18:53.144 "data_offset": 2048, 00:18:53.144 "data_size": 63488 00:18:53.144 }, 00:18:53.144 { 00:18:53.144 "name": "BaseBdev2", 00:18:53.144 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:18:53.144 "is_configured": true, 00:18:53.144 "data_offset": 2048, 00:18:53.144 "data_size": 63488 00:18:53.144 } 00:18:53.144 ] 00:18:53.144 }' 00:18:53.144 06:50:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:53.144 06:50:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:53.710 06:50:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:53.969 [2024-08-14 06:50:21.003347] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:53.969 [2024-08-14 06:50:21.050017] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:18:53.969 06:50:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:53.969 [2024-08-14 06:50:21.051967] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:53.969 
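With background I/O running against raid_bdev1 (the perform_tests RPC above drives the 60-second randrw workload), the test hot-removes BaseBdev1 and verifies the array stays online but degraded: num_base_bdevs_discovered and num_base_bdevs_operational drop to 1 and the removed slot is reported with a null name and an all-zero uuid. It then re-adds 'spare', which starts the rebuild logged just above. A condensed sketch of the remove-and-check step; the jq expression is an illustrative equivalent of the script's verify_raid_bdev_state helper, not its exact code:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Background I/O keeps running while one mirror leg is pulled out.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &
    $rpc -s $sock bdev_raid_remove_base_bdev BaseBdev1
    # raid1 must survive the removal: still online, one operational member left.
    $rpc -s $sock bdev_raid_get_bdevs all | jq -e \
        '.[] | select(.name == "raid_bdev1") | .state == "online" and .num_base_bdevs_operational == 1'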
[2024-08-14 06:50:21.160919] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:53.969 [2024-08-14 06:50:21.161537] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:54.228 [2024-08-14 06:50:21.375237] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:54.228 [2024-08-14 06:50:21.375573] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:54.487 [2024-08-14 06:50:21.722900] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:54.487 [2024-08-14 06:50:21.723474] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:54.746 [2024-08-14 06:50:21.940144] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:55.005 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.005 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:55.005 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:55.005 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:55.005 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:55.005 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.005 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.264 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:55.264 "name": "raid_bdev1", 00:18:55.264 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:18:55.264 "strip_size_kb": 0, 00:18:55.264 "state": "online", 00:18:55.264 "raid_level": "raid1", 00:18:55.264 "superblock": true, 00:18:55.264 "num_base_bdevs": 2, 00:18:55.264 "num_base_bdevs_discovered": 2, 00:18:55.264 "num_base_bdevs_operational": 2, 00:18:55.264 "process": { 00:18:55.264 "type": "rebuild", 00:18:55.264 "target": "spare", 00:18:55.264 "progress": { 00:18:55.264 "blocks": 16384, 00:18:55.264 "percent": 25 00:18:55.264 } 00:18:55.264 }, 00:18:55.264 "base_bdevs_list": [ 00:18:55.264 { 00:18:55.264 "name": "spare", 00:18:55.264 "uuid": "a852eef1-2067-5419-879f-08ba4fa80681", 00:18:55.264 "is_configured": true, 00:18:55.264 "data_offset": 2048, 00:18:55.264 "data_size": 63488 00:18:55.264 }, 00:18:55.264 { 00:18:55.264 "name": "BaseBdev2", 00:18:55.264 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:18:55.264 "is_configured": true, 00:18:55.264 "data_offset": 2048, 00:18:55.264 "data_size": 63488 00:18:55.264 } 00:18:55.264 ] 00:18:55.264 }' 00:18:55.264 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:55.264 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.264 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:55.264 06:50:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.264 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:55.264 [2024-08-14 06:50:22.499481] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:55.523 [2024-08-14 06:50:22.606695] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:55.523 [2024-08-14 06:50:22.712234] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:55.523 [2024-08-14 06:50:22.720505] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.523 [2024-08-14 06:50:22.720623] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:55.523 [2024-08-14 06:50:22.720653] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:55.523 [2024-08-14 06:50:22.732255] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:18:55.523 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:55.523 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:55.523 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:55.523 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:55.523 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:55.523 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:55.523 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:55.523 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:55.523 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:55.523 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:55.782 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.782 06:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.782 06:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:55.782 "name": "raid_bdev1", 00:18:55.782 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:18:55.783 "strip_size_kb": 0, 00:18:55.783 "state": "online", 00:18:55.783 "raid_level": "raid1", 00:18:55.783 "superblock": true, 00:18:55.783 "num_base_bdevs": 2, 00:18:55.783 "num_base_bdevs_discovered": 1, 00:18:55.783 "num_base_bdevs_operational": 1, 00:18:55.783 "base_bdevs_list": [ 00:18:55.783 { 00:18:55.783 "name": null, 00:18:55.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.783 "is_configured": false, 00:18:55.783 "data_offset": 2048, 00:18:55.783 "data_size": 63488 00:18:55.783 }, 00:18:55.783 { 00:18:55.783 "name": "BaseBdev2", 00:18:55.783 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:18:55.783 "is_configured": true, 00:18:55.783 "data_offset": 2048, 
00:18:55.783 "data_size": 63488 00:18:55.783 } 00:18:55.783 ] 00:18:55.783 }' 00:18:55.783 06:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:55.783 06:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:56.352 06:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:56.352 06:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:56.352 06:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:56.352 06:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:56.352 06:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:56.352 06:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.352 06:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.611 06:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:56.611 "name": "raid_bdev1", 00:18:56.611 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:18:56.611 "strip_size_kb": 0, 00:18:56.611 "state": "online", 00:18:56.611 "raid_level": "raid1", 00:18:56.611 "superblock": true, 00:18:56.611 "num_base_bdevs": 2, 00:18:56.611 "num_base_bdevs_discovered": 1, 00:18:56.611 "num_base_bdevs_operational": 1, 00:18:56.611 "base_bdevs_list": [ 00:18:56.611 { 00:18:56.611 "name": null, 00:18:56.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.611 "is_configured": false, 00:18:56.611 "data_offset": 2048, 00:18:56.611 "data_size": 63488 00:18:56.611 }, 00:18:56.611 { 00:18:56.611 "name": "BaseBdev2", 00:18:56.611 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:18:56.611 "is_configured": true, 00:18:56.611 "data_offset": 2048, 00:18:56.611 "data_size": 63488 00:18:56.611 } 00:18:56.612 ] 00:18:56.612 }' 00:18:56.612 06:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:56.871 06:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:56.871 06:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:56.871 06:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:56.871 06:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:57.129 [2024-08-14 06:50:24.193966] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:57.129 06:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:18:57.129 [2024-08-14 06:50:24.268797] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:18:57.129 [2024-08-14 06:50:24.271073] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:57.394 [2024-08-14 06:50:24.401730] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:57.394 [2024-08-14 06:50:24.402348] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 
offset_begin: 0 offset_end: 6144 00:18:57.394 [2024-08-14 06:50:24.635692] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:57.394 [2024-08-14 06:50:24.636128] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:57.971 [2024-08-14 06:50:24.981116] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:57.971 [2024-08-14 06:50:25.091971] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:57.971 [2024-08-14 06:50:25.092436] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:58.230 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.230 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:58.230 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:58.230 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:58.230 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:58.230 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.230 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.230 [2024-08-14 06:50:25.342986] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:58.489 "name": "raid_bdev1", 00:18:58.489 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:18:58.489 "strip_size_kb": 0, 00:18:58.489 "state": "online", 00:18:58.489 "raid_level": "raid1", 00:18:58.489 "superblock": true, 00:18:58.489 "num_base_bdevs": 2, 00:18:58.489 "num_base_bdevs_discovered": 2, 00:18:58.489 "num_base_bdevs_operational": 2, 00:18:58.489 "process": { 00:18:58.489 "type": "rebuild", 00:18:58.489 "target": "spare", 00:18:58.489 "progress": { 00:18:58.489 "blocks": 14336, 00:18:58.489 "percent": 22 00:18:58.489 } 00:18:58.489 }, 00:18:58.489 "base_bdevs_list": [ 00:18:58.489 { 00:18:58.489 "name": "spare", 00:18:58.489 "uuid": "a852eef1-2067-5419-879f-08ba4fa80681", 00:18:58.489 "is_configured": true, 00:18:58.489 "data_offset": 2048, 00:18:58.489 "data_size": 63488 00:18:58.489 }, 00:18:58.489 { 00:18:58.489 "name": "BaseBdev2", 00:18:58.489 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:18:58.489 "is_configured": true, 00:18:58.489 "data_offset": 2048, 00:18:58.489 "data_size": 63488 00:18:58.489 } 00:18:58.489 ] 00:18:58.489 }' 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:58.489 
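While a rebuild is active, the same bdev_raid_get_bdevs record carries a process object, and the verify_raid_bdev_process checks in the trace boil down to reading it: type must be rebuild, target must be spare, and progress.blocks/percent advance between samples (16384/25 for the first rebuild, 14336/22 for the restarted one here). The earlier attempt to remove the target mid-rebuild raced with completion, hence the 'Finished rebuild ... No such device' and 'Failed to remove target bdev' messages, after which the spare was simply added again. A sketch of the process checks, mirroring the jq defaults used in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    echo "$info" | jq -r '.process.type // "none"'         # "rebuild" while one is running, else "none"
    echo "$info" | jq -r '.process.target // "none"'        # "spare" here
    echo "$info" | jq -r '.process.progress.percent // 0'   # e.g. 22, 32, ... as the rebuild advances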
06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:18:58.489 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=784 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.489 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.748 [2024-08-14 06:50:25.798273] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:58.748 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:58.748 "name": "raid_bdev1", 00:18:58.748 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:18:58.748 "strip_size_kb": 0, 00:18:58.748 "state": "online", 00:18:58.748 "raid_level": "raid1", 00:18:58.748 "superblock": true, 00:18:58.748 "num_base_bdevs": 2, 00:18:58.748 "num_base_bdevs_discovered": 2, 00:18:58.748 "num_base_bdevs_operational": 2, 00:18:58.748 "process": { 00:18:58.748 "type": "rebuild", 00:18:58.748 "target": "spare", 00:18:58.748 "progress": { 00:18:58.748 "blocks": 20480, 00:18:58.748 "percent": 32 00:18:58.748 } 00:18:58.748 }, 00:18:58.748 "base_bdevs_list": [ 00:18:58.748 { 00:18:58.748 "name": "spare", 00:18:58.748 "uuid": "a852eef1-2067-5419-879f-08ba4fa80681", 00:18:58.748 "is_configured": true, 00:18:58.748 "data_offset": 2048, 00:18:58.748 "data_size": 63488 00:18:58.748 }, 00:18:58.748 { 00:18:58.748 "name": "BaseBdev2", 00:18:58.748 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:18:58.748 "is_configured": true, 00:18:58.748 "data_offset": 2048, 00:18:58.748 "data_size": 63488 00:18:58.748 } 00:18:58.748 ] 00:18:58.748 }' 00:18:58.748 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:58.748 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:58.748 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:58.748 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 
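The '[: =: unary operator expected' message above is a genuine shell error captured from bdev_raid.sh line 681: the left-hand operand of the second test expanded to an empty string, leaving '[' = false ']'. The failing test returns non-zero, so that branch is simply not taken and the trace moves straight on to setting num_base_bdevs_operational; quoting the operand, or giving it a default, keeps the test well-formed. A hedged sketch of the usual fix; the variable name is illustrative, not the one used in the script:

    # Unquoted and empty:   [ $skip_rebuild = false ]   ->   '[' = false ']'   ->   unary operator error
    # Quoting with a default expansion keeps both operands present even when the variable is unset or empty:
    if [ "${skip_rebuild:-}" = false ]; then
        echo "rebuild will be exercised"
    fi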
00:18:58.748 06:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:18:59.007 [2024-08-14 06:50:26.026076] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:59.266 [2024-08-14 06:50:26.383347] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:59.524 [2024-08-14 06:50:26.748080] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:18:59.782 [2024-08-14 06:50:26.973912] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:59.782 06:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:18:59.782 06:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.782 06:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:59.782 06:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:59.782 06:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:59.782 06:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:59.782 06:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.782 06:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.040 06:50:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:00.040 "name": "raid_bdev1", 00:19:00.040 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:00.040 "strip_size_kb": 0, 00:19:00.040 "state": "online", 00:19:00.040 "raid_level": "raid1", 00:19:00.040 "superblock": true, 00:19:00.040 "num_base_bdevs": 2, 00:19:00.040 "num_base_bdevs_discovered": 2, 00:19:00.040 "num_base_bdevs_operational": 2, 00:19:00.040 "process": { 00:19:00.040 "type": "rebuild", 00:19:00.040 "target": "spare", 00:19:00.040 "progress": { 00:19:00.040 "blocks": 34816, 00:19:00.040 "percent": 54 00:19:00.040 } 00:19:00.040 }, 00:19:00.040 "base_bdevs_list": [ 00:19:00.041 { 00:19:00.041 "name": "spare", 00:19:00.041 "uuid": "a852eef1-2067-5419-879f-08ba4fa80681", 00:19:00.041 "is_configured": true, 00:19:00.041 "data_offset": 2048, 00:19:00.041 "data_size": 63488 00:19:00.041 }, 00:19:00.041 { 00:19:00.041 "name": "BaseBdev2", 00:19:00.041 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:00.041 "is_configured": true, 00:19:00.041 "data_offset": 2048, 00:19:00.041 "data_size": 63488 00:19:00.041 } 00:19:00.041 ] 00:19:00.041 }' 00:19:00.041 06:50:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:00.041 06:50:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:00.041 06:50:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:00.299 06:50:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:00.299 06:50:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:19:00.299 [2024-08-14 06:50:27.361712] bdev_raid.c: 
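The sleep 1 above is part of a progress-polling loop: the script keeps a timeout (784 seconds here, compared against bash's SECONDS counter) and, on each pass, re-reads the raid bdev, confirms the process is still a rebuild targeting spare, and sleeps a second before the next sample; the trace shows progress climbing from 20480 blocks (32%) toward completion. A minimal sketch of that loop shape; the break condition is an illustrative simplification of the script's verify helpers:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    timeout=784
    while (( SECONDS < timeout )); do
        ptype=$($rpc -s $sock bdev_raid_get_bdevs all |
                jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [ "$ptype" = rebuild ] || break   # process gone -> rebuild finished, leave the loop
        sleep 1
    done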
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:19:00.299 [2024-08-14 06:50:27.362306] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:19:00.557 [2024-08-14 06:50:27.798279] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:19:00.815 [2024-08-14 06:50:27.900606] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:00.815 [2024-08-14 06:50:27.901033] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:01.381 06:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:19:01.381 06:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:01.381 06:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:01.381 06:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:01.381 06:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:01.381 06:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:01.381 06:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.381 06:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.381 [2024-08-14 06:50:28.456874] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:19:01.381 06:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:01.381 "name": "raid_bdev1", 00:19:01.381 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:01.381 "strip_size_kb": 0, 00:19:01.381 "state": "online", 00:19:01.381 "raid_level": "raid1", 00:19:01.381 "superblock": true, 00:19:01.381 "num_base_bdevs": 2, 00:19:01.381 "num_base_bdevs_discovered": 2, 00:19:01.381 "num_base_bdevs_operational": 2, 00:19:01.381 "process": { 00:19:01.381 "type": "rebuild", 00:19:01.381 "target": "spare", 00:19:01.381 "progress": { 00:19:01.381 "blocks": 57344, 00:19:01.381 "percent": 90 00:19:01.381 } 00:19:01.381 }, 00:19:01.381 "base_bdevs_list": [ 00:19:01.381 { 00:19:01.381 "name": "spare", 00:19:01.381 "uuid": "a852eef1-2067-5419-879f-08ba4fa80681", 00:19:01.381 "is_configured": true, 00:19:01.381 "data_offset": 2048, 00:19:01.381 "data_size": 63488 00:19:01.381 }, 00:19:01.381 { 00:19:01.381 "name": "BaseBdev2", 00:19:01.381 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:01.381 "is_configured": true, 00:19:01.381 "data_offset": 2048, 00:19:01.381 "data_size": 63488 00:19:01.381 } 00:19:01.381 ] 00:19:01.381 }' 00:19:01.381 06:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:01.640 06:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:01.640 [2024-08-14 06:50:28.681954] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:19:01.640 06:50:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:01.640 06:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:01.640 06:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:19:01.898 [2024-08-14 06:50:29.006200] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:01.898 [2024-08-14 06:50:29.112864] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:01.898 [2024-08-14 06:50:29.116039] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.833 06:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:19:02.833 06:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:02.833 06:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:02.833 06:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:02.833 06:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:02.833 06:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:02.833 06:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.833 06:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.833 06:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:02.833 "name": "raid_bdev1", 00:19:02.833 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:02.833 "strip_size_kb": 0, 00:19:02.833 "state": "online", 00:19:02.833 "raid_level": "raid1", 00:19:02.833 "superblock": true, 00:19:02.833 "num_base_bdevs": 2, 00:19:02.833 "num_base_bdevs_discovered": 2, 00:19:02.833 "num_base_bdevs_operational": 2, 00:19:02.833 "base_bdevs_list": [ 00:19:02.833 { 00:19:02.833 "name": "spare", 00:19:02.833 "uuid": "a852eef1-2067-5419-879f-08ba4fa80681", 00:19:02.833 "is_configured": true, 00:19:02.833 "data_offset": 2048, 00:19:02.833 "data_size": 63488 00:19:02.833 }, 00:19:02.833 { 00:19:02.833 "name": "BaseBdev2", 00:19:02.833 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:02.833 "is_configured": true, 00:19:02.833 "data_offset": 2048, 00:19:02.833 "data_size": 63488 00:19:02.833 } 00:19:02.833 ] 00:19:02.833 }' 00:19:02.833 06:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:02.833 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:02.833 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:02.833 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:19:02.833 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:19:02.833 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:02.833 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:02.833 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # 
local process_type=none 00:19:02.833 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:02.833 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:02.833 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.833 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.091 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:03.091 "name": "raid_bdev1", 00:19:03.091 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:03.091 "strip_size_kb": 0, 00:19:03.091 "state": "online", 00:19:03.091 "raid_level": "raid1", 00:19:03.091 "superblock": true, 00:19:03.091 "num_base_bdevs": 2, 00:19:03.091 "num_base_bdevs_discovered": 2, 00:19:03.091 "num_base_bdevs_operational": 2, 00:19:03.091 "base_bdevs_list": [ 00:19:03.091 { 00:19:03.091 "name": "spare", 00:19:03.091 "uuid": "a852eef1-2067-5419-879f-08ba4fa80681", 00:19:03.091 "is_configured": true, 00:19:03.091 "data_offset": 2048, 00:19:03.091 "data_size": 63488 00:19:03.091 }, 00:19:03.091 { 00:19:03.091 "name": "BaseBdev2", 00:19:03.091 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:03.091 "is_configured": true, 00:19:03.091 "data_offset": 2048, 00:19:03.091 "data_size": 63488 00:19:03.091 } 00:19:03.091 ] 00:19:03.091 }' 00:19:03.091 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:03.349 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:03.349 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:03.349 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:03.349 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:03.349 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:03.349 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:03.349 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:03.349 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:03.349 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:03.349 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.349 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.349 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.349 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.349 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.349 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.607 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.607 "name": 
"raid_bdev1", 00:19:03.607 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:03.607 "strip_size_kb": 0, 00:19:03.607 "state": "online", 00:19:03.607 "raid_level": "raid1", 00:19:03.607 "superblock": true, 00:19:03.607 "num_base_bdevs": 2, 00:19:03.607 "num_base_bdevs_discovered": 2, 00:19:03.607 "num_base_bdevs_operational": 2, 00:19:03.607 "base_bdevs_list": [ 00:19:03.607 { 00:19:03.607 "name": "spare", 00:19:03.607 "uuid": "a852eef1-2067-5419-879f-08ba4fa80681", 00:19:03.607 "is_configured": true, 00:19:03.607 "data_offset": 2048, 00:19:03.607 "data_size": 63488 00:19:03.607 }, 00:19:03.607 { 00:19:03.607 "name": "BaseBdev2", 00:19:03.607 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:03.607 "is_configured": true, 00:19:03.607 "data_offset": 2048, 00:19:03.607 "data_size": 63488 00:19:03.607 } 00:19:03.607 ] 00:19:03.607 }' 00:19:03.607 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.607 06:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:04.174 06:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:04.432 [2024-08-14 06:50:31.505346] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.432 [2024-08-14 06:50:31.505485] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:04.432 00:19:04.432 Latency(us) 00:19:04.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.432 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:04.432 raid_bdev1 : 11.80 96.75 290.24 0.00 0.00 14320.86 345.21 118136.51 00:19:04.432 =================================================================================================================== 00:19:04.432 Total : 96.75 290.24 0.00 0.00 14320.86 345.21 118136.51 00:19:04.432 [2024-08-14 06:50:31.609717] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.432 0 00:19:04.432 [2024-08-14 06:50:31.609838] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.432 [2024-08-14 06:50:31.609967] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:04.432 [2024-08-14 06:50:31.609996] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:19:04.432 06:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:19:04.432 06:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.690 06:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:19:04.690 06:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:19:04.690 06:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:19:04.690 06:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:19:04.690 06:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:04.690 06:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:04.690 06:50:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:04.690 06:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:04.690 06:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:04.690 06:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:04.690 06:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:04.690 06:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:04.690 06:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:19:04.948 /dev/nbd0 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:04.948 1+0 records in 00:19:04.948 1+0 records out 00:19:04.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238514 s, 17.2 MB/s 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:04.948 06:50:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:04.948 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:05.206 /dev/nbd1 00:19:05.206 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:05.206 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:05.206 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:19:05.206 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:19:05.207 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:19:05.207 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:19:05.207 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:19:05.207 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:19:05.207 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:19:05.207 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:19:05.207 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:05.207 1+0 records in 00:19:05.207 1+0 records out 00:19:05.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266744 s, 15.4 MB/s 00:19:05.207 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.207 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:19:05.207 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.207 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:19:05.207 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:19:05.207 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:05.207 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:05.207 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:05.464 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:05.464 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:05.464 06:50:32 
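For data verification the rebuilt member ('spare') and the surviving member ('BaseBdev2') are exported to the host as NBD block devices, and each /dev/nbdX is only used once it shows up in /proc/partitions; the trace also does a single 4 KiB direct-I/O dd read from each device as a sanity check. A sketch of that export-and-wait pattern with the devices used above; the loop body is illustrative rather than the nbd_common.sh helpers themselves:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for pair in "spare /dev/nbd0" "BaseBdev2 /dev/nbd1"; do
        set -- $pair                       # $1 = bdev name, $2 = NBD device node
        $rpc -s $sock nbd_start_disk "$1" "$2"
        # Wait until the kernel has registered the device before reading from it.
        until grep -q -w "$(basename "$2")" /proc/partitions; do sleep 0.1; done
    done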
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:05.464 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:05.464 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:05.464 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:05.464 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:05.722 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:05.722 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:05.722 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:05.722 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:05.723 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:05.723 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:05.723 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:05.723 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:05.723 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:05.723 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:05.723 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:05.723 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:05.723 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:05.723 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:05.723 06:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:06.000 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:06.000 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:06.000 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:06.000 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:06.000 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:06.000 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:06.000 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:06.000 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:06.000 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:19:06.000 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:06.259 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
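The two exports are then compared with cmp -i 1048576, which skips the first 1,048,576 bytes of both devices before comparing: that is exactly the data_offset recorded earlier, 2048 blocks of 512 bytes, so the superblock region at the head of each member is excluded and only the mirrored data area has to match byte for byte. In short:

    # data_offset (blocks) x block size = bytes of superblock area to skip on both sides.
    skip=$(( 2048 * 512 ))                 # 1048576
    cmp -i "$skip" /dev/nbd0 /dev/nbd1     # non-zero exit on the first differing byte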
spare_delay -p spare 00:19:06.259 [2024-08-14 06:50:33.493355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:06.259 [2024-08-14 06:50:33.493524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.259 [2024-08-14 06:50:33.493578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:06.259 [2024-08-14 06:50:33.493603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.259 [2024-08-14 06:50:33.496136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.259 spare 00:19:06.259 [2024-08-14 06:50:33.496264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:06.259 [2024-08-14 06:50:33.496395] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:06.259 [2024-08-14 06:50:33.496446] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:06.259 [2024-08-14 06:50:33.496601] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:06.517 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:06.518 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:06.518 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:06.518 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:06.518 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:06.518 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:06.518 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:06.518 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:06.518 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:06.518 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:06.518 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.518 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.518 [2024-08-14 06:50:33.596526] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:19:06.518 [2024-08-14 06:50:33.596662] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:06.518 [2024-08-14 06:50:33.597098] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027720 00:19:06.518 [2024-08-14 06:50:33.597391] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:19:06.518 [2024-08-14 06:50:33.597474] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:19:06.518 [2024-08-14 06:50:33.597722] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.518 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:06.518 "name": "raid_bdev1", 00:19:06.518 "uuid": 
"a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:06.518 "strip_size_kb": 0, 00:19:06.518 "state": "online", 00:19:06.518 "raid_level": "raid1", 00:19:06.518 "superblock": true, 00:19:06.518 "num_base_bdevs": 2, 00:19:06.518 "num_base_bdevs_discovered": 2, 00:19:06.518 "num_base_bdevs_operational": 2, 00:19:06.518 "base_bdevs_list": [ 00:19:06.518 { 00:19:06.518 "name": "spare", 00:19:06.518 "uuid": "a852eef1-2067-5419-879f-08ba4fa80681", 00:19:06.518 "is_configured": true, 00:19:06.518 "data_offset": 2048, 00:19:06.518 "data_size": 63488 00:19:06.518 }, 00:19:06.518 { 00:19:06.518 "name": "BaseBdev2", 00:19:06.518 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:06.518 "is_configured": true, 00:19:06.518 "data_offset": 2048, 00:19:06.518 "data_size": 63488 00:19:06.518 } 00:19:06.518 ] 00:19:06.518 }' 00:19:06.518 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:06.518 06:50:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:07.451 06:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:07.451 06:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:07.451 06:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:07.451 06:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:07.451 06:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:07.451 06:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.451 06:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.451 06:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:07.451 "name": "raid_bdev1", 00:19:07.451 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:07.451 "strip_size_kb": 0, 00:19:07.451 "state": "online", 00:19:07.451 "raid_level": "raid1", 00:19:07.451 "superblock": true, 00:19:07.451 "num_base_bdevs": 2, 00:19:07.451 "num_base_bdevs_discovered": 2, 00:19:07.451 "num_base_bdevs_operational": 2, 00:19:07.451 "base_bdevs_list": [ 00:19:07.451 { 00:19:07.451 "name": "spare", 00:19:07.451 "uuid": "a852eef1-2067-5419-879f-08ba4fa80681", 00:19:07.451 "is_configured": true, 00:19:07.451 "data_offset": 2048, 00:19:07.451 "data_size": 63488 00:19:07.451 }, 00:19:07.451 { 00:19:07.451 "name": "BaseBdev2", 00:19:07.451 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:07.451 "is_configured": true, 00:19:07.451 "data_offset": 2048, 00:19:07.451 "data_size": 63488 00:19:07.451 } 00:19:07.451 ] 00:19:07.451 }' 00:19:07.451 06:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:07.451 06:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:07.451 06:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:07.709 06:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:07.709 06:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.709 06:50:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:07.968 06:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.968 06:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:07.968 [2024-08-14 06:50:35.219467] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:08.226 06:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:08.226 06:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:08.226 06:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:08.226 06:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:08.226 06:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:08.226 06:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:08.226 06:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:08.226 06:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:08.226 06:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:08.226 06:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:08.226 06:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.226 06:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.485 06:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.485 "name": "raid_bdev1", 00:19:08.485 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:08.485 "strip_size_kb": 0, 00:19:08.485 "state": "online", 00:19:08.485 "raid_level": "raid1", 00:19:08.485 "superblock": true, 00:19:08.485 "num_base_bdevs": 2, 00:19:08.485 "num_base_bdevs_discovered": 1, 00:19:08.485 "num_base_bdevs_operational": 1, 00:19:08.485 "base_bdevs_list": [ 00:19:08.485 { 00:19:08.485 "name": null, 00:19:08.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.485 "is_configured": false, 00:19:08.485 "data_offset": 2048, 00:19:08.485 "data_size": 63488 00:19:08.485 }, 00:19:08.485 { 00:19:08.485 "name": "BaseBdev2", 00:19:08.485 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:08.485 "is_configured": true, 00:19:08.485 "data_offset": 2048, 00:19:08.485 "data_size": 63488 00:19:08.485 } 00:19:08.485 ] 00:19:08.485 }' 00:19:08.485 06:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.485 06:50:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:09.051 06:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:09.051 [2024-08-14 06:50:36.258367] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:09.051 [2024-08-14 06:50:36.258705] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:09.051 [2024-08-14 06:50:36.258730] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:09.051 [2024-08-14 06:50:36.258792] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:09.051 [2024-08-14 06:50:36.263497] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000277f0 00:19:09.051 [2024-08-14 06:50:36.265642] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:09.051 06:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:19:10.425 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:10.425 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:10.425 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:10.425 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:10.425 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:10.425 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.425 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.425 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:10.425 "name": "raid_bdev1", 00:19:10.425 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:10.425 "strip_size_kb": 0, 00:19:10.425 "state": "online", 00:19:10.425 "raid_level": "raid1", 00:19:10.425 "superblock": true, 00:19:10.425 "num_base_bdevs": 2, 00:19:10.425 "num_base_bdevs_discovered": 2, 00:19:10.425 "num_base_bdevs_operational": 2, 00:19:10.425 "process": { 00:19:10.425 "type": "rebuild", 00:19:10.425 "target": "spare", 00:19:10.425 "progress": { 00:19:10.425 "blocks": 24576, 00:19:10.425 "percent": 38 00:19:10.425 } 00:19:10.425 }, 00:19:10.425 "base_bdevs_list": [ 00:19:10.425 { 00:19:10.425 "name": "spare", 00:19:10.425 "uuid": "a852eef1-2067-5419-879f-08ba4fa80681", 00:19:10.425 "is_configured": true, 00:19:10.425 "data_offset": 2048, 00:19:10.425 "data_size": 63488 00:19:10.425 }, 00:19:10.425 { 00:19:10.425 "name": "BaseBdev2", 00:19:10.425 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:10.425 "is_configured": true, 00:19:10.425 "data_offset": 2048, 00:19:10.425 "data_size": 63488 00:19:10.425 } 00:19:10.425 ] 00:19:10.425 }' 00:19:10.425 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:10.425 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:10.425 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:10.425 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.425 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:10.683 [2024-08-14 06:50:37.813973] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:10.683 [2024-08-14 
06:50:37.872907] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:10.683 [2024-08-14 06:50:37.872996] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:10.683 [2024-08-14 06:50:37.873035] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:10.683 [2024-08-14 06:50:37.873044] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:10.683 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:10.683 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:10.683 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:10.683 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:10.683 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:10.683 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:10.683 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:10.683 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:10.683 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:10.683 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:10.683 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.683 06:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.941 06:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:10.941 "name": "raid_bdev1", 00:19:10.941 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:10.941 "strip_size_kb": 0, 00:19:10.941 "state": "online", 00:19:10.941 "raid_level": "raid1", 00:19:10.941 "superblock": true, 00:19:10.941 "num_base_bdevs": 2, 00:19:10.941 "num_base_bdevs_discovered": 1, 00:19:10.941 "num_base_bdevs_operational": 1, 00:19:10.941 "base_bdevs_list": [ 00:19:10.941 { 00:19:10.941 "name": null, 00:19:10.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.941 "is_configured": false, 00:19:10.941 "data_offset": 2048, 00:19:10.941 "data_size": 63488 00:19:10.941 }, 00:19:10.941 { 00:19:10.941 "name": "BaseBdev2", 00:19:10.941 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:10.941 "is_configured": true, 00:19:10.941 "data_offset": 2048, 00:19:10.941 "data_size": 63488 00:19:10.941 } 00:19:10.941 ] 00:19:10.941 }' 00:19:10.941 06:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:10.941 06:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:11.509 06:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:11.769 [2024-08-14 06:50:38.920914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:11.769 [2024-08-14 06:50:38.921003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:19:11.769 [2024-08-14 06:50:38.921034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:11.769 [2024-08-14 06:50:38.921044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.769 [2024-08-14 06:50:38.921572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.769 [2024-08-14 06:50:38.921600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:11.769 [2024-08-14 06:50:38.921709] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:11.769 [2024-08-14 06:50:38.921723] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:11.769 [2024-08-14 06:50:38.921738] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:11.769 [2024-08-14 06:50:38.921773] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:11.769 [2024-08-14 06:50:38.926458] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:19:11.769 spare 00:19:11.769 [2024-08-14 06:50:38.928626] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:11.769 06:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:19:12.712 06:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.712 06:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:12.712 06:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:12.712 06:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:12.712 06:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:12.712 06:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.712 06:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.974 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:12.974 "name": "raid_bdev1", 00:19:12.974 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:12.974 "strip_size_kb": 0, 00:19:12.974 "state": "online", 00:19:12.974 "raid_level": "raid1", 00:19:12.974 "superblock": true, 00:19:12.974 "num_base_bdevs": 2, 00:19:12.974 "num_base_bdevs_discovered": 2, 00:19:12.974 "num_base_bdevs_operational": 2, 00:19:12.974 "process": { 00:19:12.974 "type": "rebuild", 00:19:12.974 "target": "spare", 00:19:12.974 "progress": { 00:19:12.974 "blocks": 24576, 00:19:12.974 "percent": 38 00:19:12.974 } 00:19:12.974 }, 00:19:12.974 "base_bdevs_list": [ 00:19:12.974 { 00:19:12.974 "name": "spare", 00:19:12.974 "uuid": "a852eef1-2067-5419-879f-08ba4fa80681", 00:19:12.974 "is_configured": true, 00:19:12.974 "data_offset": 2048, 00:19:12.974 "data_size": 63488 00:19:12.974 }, 00:19:12.974 { 00:19:12.974 "name": "BaseBdev2", 00:19:12.974 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:12.974 "is_configured": true, 00:19:12.974 "data_offset": 2048, 00:19:12.974 "data_size": 63488 00:19:12.974 } 00:19:12.974 ] 00:19:12.974 }' 00:19:12.974 06:50:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:13.233 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.233 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:13.233 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.233 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:13.491 [2024-08-14 06:50:40.489403] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.491 [2024-08-14 06:50:40.535943] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:13.491 [2024-08-14 06:50:40.536137] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.491 [2024-08-14 06:50:40.536222] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.491 [2024-08-14 06:50:40.536279] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:13.491 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:13.491 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:13.491 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:13.491 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:13.491 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:13.491 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:13.491 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.491 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.491 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:13.491 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.491 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.491 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.750 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:13.750 "name": "raid_bdev1", 00:19:13.750 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:13.750 "strip_size_kb": 0, 00:19:13.750 "state": "online", 00:19:13.750 "raid_level": "raid1", 00:19:13.750 "superblock": true, 00:19:13.750 "num_base_bdevs": 2, 00:19:13.750 "num_base_bdevs_discovered": 1, 00:19:13.750 "num_base_bdevs_operational": 1, 00:19:13.750 "base_bdevs_list": [ 00:19:13.750 { 00:19:13.750 "name": null, 00:19:13.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.750 "is_configured": false, 00:19:13.750 "data_offset": 2048, 00:19:13.750 "data_size": 63488 00:19:13.750 }, 00:19:13.750 { 00:19:13.750 "name": "BaseBdev2", 00:19:13.750 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:13.750 
"is_configured": true, 00:19:13.750 "data_offset": 2048, 00:19:13.750 "data_size": 63488 00:19:13.750 } 00:19:13.750 ] 00:19:13.750 }' 00:19:13.750 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:13.750 06:50:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:14.318 06:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:14.318 06:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:14.318 06:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:14.318 06:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:14.318 06:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:14.318 06:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.318 06:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.578 06:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:14.578 "name": "raid_bdev1", 00:19:14.578 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:14.578 "strip_size_kb": 0, 00:19:14.578 "state": "online", 00:19:14.578 "raid_level": "raid1", 00:19:14.578 "superblock": true, 00:19:14.578 "num_base_bdevs": 2, 00:19:14.578 "num_base_bdevs_discovered": 1, 00:19:14.578 "num_base_bdevs_operational": 1, 00:19:14.578 "base_bdevs_list": [ 00:19:14.578 { 00:19:14.578 "name": null, 00:19:14.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.578 "is_configured": false, 00:19:14.578 "data_offset": 2048, 00:19:14.578 "data_size": 63488 00:19:14.578 }, 00:19:14.578 { 00:19:14.578 "name": "BaseBdev2", 00:19:14.578 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:14.578 "is_configured": true, 00:19:14.578 "data_offset": 2048, 00:19:14.578 "data_size": 63488 00:19:14.578 } 00:19:14.578 ] 00:19:14.578 }' 00:19:14.578 06:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:14.578 06:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:14.578 06:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:14.578 06:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:14.578 06:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:14.837 06:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:15.096 [2024-08-14 06:50:42.274710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:15.096 [2024-08-14 06:50:42.274784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.096 [2024-08-14 06:50:42.274810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:15.096 [2024-08-14 06:50:42.274823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.096 
[2024-08-14 06:50:42.275331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.096 [2024-08-14 06:50:42.275416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:15.096 [2024-08-14 06:50:42.275531] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:15.096 [2024-08-14 06:50:42.275553] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:15.096 [2024-08-14 06:50:42.275562] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:15.096 BaseBdev1 00:19:15.096 06:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:19:16.473 06:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:16.473 06:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:16.473 06:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:16.473 06:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:16.473 06:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:16.473 06:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:16.473 06:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:16.473 06:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:16.473 06:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:16.473 06:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:16.473 06:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.473 06:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.473 06:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:16.473 "name": "raid_bdev1", 00:19:16.473 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:16.473 "strip_size_kb": 0, 00:19:16.473 "state": "online", 00:19:16.473 "raid_level": "raid1", 00:19:16.473 "superblock": true, 00:19:16.473 "num_base_bdevs": 2, 00:19:16.473 "num_base_bdevs_discovered": 1, 00:19:16.473 "num_base_bdevs_operational": 1, 00:19:16.473 "base_bdevs_list": [ 00:19:16.473 { 00:19:16.473 "name": null, 00:19:16.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.473 "is_configured": false, 00:19:16.473 "data_offset": 2048, 00:19:16.473 "data_size": 63488 00:19:16.473 }, 00:19:16.473 { 00:19:16.473 "name": "BaseBdev2", 00:19:16.473 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:16.473 "is_configured": true, 00:19:16.473 "data_offset": 2048, 00:19:16.473 "data_size": 63488 00:19:16.473 } 00:19:16.473 ] 00:19:16.473 }' 00:19:16.473 06:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:16.473 06:50:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:17.039 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none 
none 00:19:17.039 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:17.039 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:17.039 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:17.039 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:17.039 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.039 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.298 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:17.298 "name": "raid_bdev1", 00:19:17.298 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:17.298 "strip_size_kb": 0, 00:19:17.298 "state": "online", 00:19:17.298 "raid_level": "raid1", 00:19:17.298 "superblock": true, 00:19:17.298 "num_base_bdevs": 2, 00:19:17.298 "num_base_bdevs_discovered": 1, 00:19:17.298 "num_base_bdevs_operational": 1, 00:19:17.298 "base_bdevs_list": [ 00:19:17.298 { 00:19:17.298 "name": null, 00:19:17.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.298 "is_configured": false, 00:19:17.298 "data_offset": 2048, 00:19:17.298 "data_size": 63488 00:19:17.298 }, 00:19:17.298 { 00:19:17.298 "name": "BaseBdev2", 00:19:17.298 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:17.298 "is_configured": true, 00:19:17.298 "data_offset": 2048, 00:19:17.298 "data_size": 63488 00:19:17.298 } 00:19:17.298 ] 00:19:17.298 }' 00:19:17.298 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:17.298 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:17.298 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:17.298 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:17.298 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:17.298 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@646 -- # local es=0 00:19:17.298 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:17.298 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:17.298 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:19:17.298 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:17.298 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:19:17.298 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:17.298 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:19:17.298 06:50:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:17.298 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:17.298 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:17.556 [2024-08-14 06:50:44.651561] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:17.556 [2024-08-14 06:50:44.651745] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:17.556 [2024-08-14 06:50:44.651809] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:17.556 request: 00:19:17.556 { 00:19:17.556 "base_bdev": "BaseBdev1", 00:19:17.556 "raid_bdev": "raid_bdev1", 00:19:17.556 "method": "bdev_raid_add_base_bdev", 00:19:17.556 "req_id": 1 00:19:17.556 } 00:19:17.556 Got JSON-RPC error response 00:19:17.556 response: 00:19:17.556 { 00:19:17.556 "code": -22, 00:19:17.556 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:17.556 } 00:19:17.557 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@649 -- # es=1 00:19:17.557 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:19:17.557 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:19:17.557 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:19:17.557 06:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:19:18.492 06:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:18.492 06:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:18.492 06:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:18.492 06:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:18.492 06:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:18.492 06:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:18.492 06:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.492 06:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:18.492 06:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:18.492 06:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.492 06:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.492 06:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.751 06:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.751 "name": "raid_bdev1", 00:19:18.751 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:18.751 "strip_size_kb": 0, 00:19:18.751 "state": 
"online", 00:19:18.751 "raid_level": "raid1", 00:19:18.751 "superblock": true, 00:19:18.751 "num_base_bdevs": 2, 00:19:18.751 "num_base_bdevs_discovered": 1, 00:19:18.751 "num_base_bdevs_operational": 1, 00:19:18.751 "base_bdevs_list": [ 00:19:18.751 { 00:19:18.751 "name": null, 00:19:18.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.751 "is_configured": false, 00:19:18.751 "data_offset": 2048, 00:19:18.751 "data_size": 63488 00:19:18.751 }, 00:19:18.751 { 00:19:18.751 "name": "BaseBdev2", 00:19:18.751 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:18.751 "is_configured": true, 00:19:18.751 "data_offset": 2048, 00:19:18.751 "data_size": 63488 00:19:18.751 } 00:19:18.751 ] 00:19:18.751 }' 00:19:18.751 06:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.751 06:50:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.397 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:19.397 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:19.397 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:19.397 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:19.397 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:19.397 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.397 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.656 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:19.656 "name": "raid_bdev1", 00:19:19.656 "uuid": "a90d4c83-4201-48e5-b485-e06dc8f6b99c", 00:19:19.656 "strip_size_kb": 0, 00:19:19.656 "state": "online", 00:19:19.656 "raid_level": "raid1", 00:19:19.656 "superblock": true, 00:19:19.656 "num_base_bdevs": 2, 00:19:19.656 "num_base_bdevs_discovered": 1, 00:19:19.656 "num_base_bdevs_operational": 1, 00:19:19.656 "base_bdevs_list": [ 00:19:19.656 { 00:19:19.656 "name": null, 00:19:19.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.656 "is_configured": false, 00:19:19.656 "data_offset": 2048, 00:19:19.656 "data_size": 63488 00:19:19.656 }, 00:19:19.656 { 00:19:19.656 "name": "BaseBdev2", 00:19:19.656 "uuid": "ca4b2f7f-8efd-599f-9562-0ed7847cb871", 00:19:19.656 "is_configured": true, 00:19:19.656 "data_offset": 2048, 00:19:19.656 "data_size": 63488 00:19:19.656 } 00:19:19.656 ] 00:19:19.656 }' 00:19:19.656 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:19.656 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:19.656 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:19.656 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:19.656 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 94285 00:19:19.656 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@946 -- # '[' -z 94285 ']' 00:19:19.656 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # kill -0 94285 
00:19:19.656 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # uname 00:19:19.656 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:19.656 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94285 00:19:19.656 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:19.656 killing process with pid 94285 00:19:19.656 Received shutdown signal, test time was about 27.095874 seconds 00:19:19.656 00:19:19.656 Latency(us) 00:19:19.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.656 =================================================================================================================== 00:19:19.656 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:19.656 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:19.656 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94285' 00:19:19.656 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@965 -- # kill 94285 00:19:19.656 [2024-08-14 06:50:46.867601] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:19.656 06:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # wait 94285 00:19:19.656 [2024-08-14 06:50:46.867778] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:19.656 [2024-08-14 06:50:46.867846] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:19.656 [2024-08-14 06:50:46.867863] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:19:19.656 [2024-08-14 06:50:46.895202] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:19.916 06:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:19:19.916 00:19:19.916 real 0m31.742s 00:19:19.916 user 0m50.426s 00:19:19.916 sys 0m3.565s 00:19:19.916 ************************************ 00:19:19.916 END TEST raid_rebuild_test_sb_io 00:19:19.916 ************************************ 00:19:19.916 06:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:19.916 06:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.176 06:50:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:19:20.176 06:50:47 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:19:20.176 06:50:47 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:19:20.176 06:50:47 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:20.176 06:50:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:20.176 ************************************ 00:19:20.176 START TEST raid_rebuild_test 00:19:20.176 ************************************ 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 false false true 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local 
superblock=false 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:19:20.176 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=95110 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 95110 /var/tmp/spdk-raid.sock 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 95110 ']' 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:20.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
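The records above start the next case, raid_rebuild_test with raid1 over four base bdevs (superblock=false, background_io=false, verify=true): the base_bdevs array is assembled, bdevperf is launched as the RPC target (raid_pid=95110), and the harness waits for its UNIX-domain socket. A minimal sketch of that launch-and-wait step, assuming the binary path and flags shown in the xtrace (backgrounding with & and $! stands in for the script's own pid bookkeeping):

    # bdevperf as the application under test, with the exact flags from the xtrace above:
    # 60 s of 50/50 randrw, 3M I/O size, queue depth 2, bdev_raid debug logging
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Block until the application is listening on the RPC socket
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock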
00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:20.177 06:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.177 [2024-08-14 06:50:47.314881] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:19:20.177 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:20.177 Zero copy mechanism will not be used. 00:19:20.177 [2024-08-14 06:50:47.315126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95110 ] 00:19:20.437 [2024-08-14 06:50:47.447591] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.437 [2024-08-14 06:50:47.500989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.437 [2024-08-14 06:50:47.546499] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:20.437 [2024-08-14 06:50:47.546635] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:21.006 06:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:21.006 06:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:19:21.006 06:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:19:21.006 06:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:21.265 BaseBdev1_malloc 00:19:21.265 06:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:21.525 [2024-08-14 06:50:48.724622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:21.525 [2024-08-14 06:50:48.724717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.525 [2024-08-14 06:50:48.724756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:19:21.525 [2024-08-14 06:50:48.724770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.525 [2024-08-14 06:50:48.727322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.525 BaseBdev1 00:19:21.525 [2024-08-14 06:50:48.727426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:21.525 06:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:19:21.525 06:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:21.784 BaseBdev2_malloc 00:19:21.784 06:50:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:22.043 
[2024-08-14 06:50:49.285356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:22.043 [2024-08-14 06:50:49.285454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.044 [2024-08-14 06:50:49.285482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:22.044 [2024-08-14 06:50:49.285495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.044 [2024-08-14 06:50:49.288026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.044 [2024-08-14 06:50:49.288080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:22.044 BaseBdev2 00:19:22.302 06:50:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:19:22.302 06:50:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:22.302 BaseBdev3_malloc 00:19:22.560 06:50:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:22.819 [2024-08-14 06:50:49.865581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:22.819 [2024-08-14 06:50:49.865773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.819 [2024-08-14 06:50:49.865809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:22.819 [2024-08-14 06:50:49.865823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.819 [2024-08-14 06:50:49.868328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.819 [2024-08-14 06:50:49.868373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:22.819 BaseBdev3 00:19:22.819 06:50:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:19:22.819 06:50:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:23.077 BaseBdev4_malloc 00:19:23.077 06:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:23.336 [2024-08-14 06:50:50.350336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:23.336 [2024-08-14 06:50:50.350425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.336 [2024-08-14 06:50:50.350452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:23.336 [2024-08-14 06:50:50.350467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.336 [2024-08-14 06:50:50.352898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.336 [2024-08-14 06:50:50.352943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:23.336 BaseBdev4 00:19:23.336 06:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 
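Fixture setup for the four-disk raid1 rebuild test runs through the records above and below: each BaseBdevN gets a 32 MB malloc backing bdev wrapped by a passthru bdev, and a spare_malloc bdev is created for the spare path, which the following records layer under a delay bdev and a passthru named "spare". A minimal sketch of one base bdev pair plus that spare stack, assuming the same RPC socket (the $rpc shorthand is illustrative):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # One base bdev: 32 MB / 512-byte-block malloc device wrapped by a passthru bdev
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1

    # Spare stack: malloc backing -> delay bdev -> passthru named "spare"
    # (delay arguments exactly as in the xtrace that follows)
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare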
00:19:23.336 spare_malloc 00:19:23.595 06:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:23.595 spare_delay 00:19:23.595 06:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:23.854 [2024-08-14 06:50:51.062753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:23.854 [2024-08-14 06:50:51.062844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.854 [2024-08-14 06:50:51.062872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:23.854 [2024-08-14 06:50:51.062885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.854 [2024-08-14 06:50:51.065392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.854 [2024-08-14 06:50:51.065439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:23.854 spare 00:19:23.854 06:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:19:24.113 [2024-08-14 06:50:51.274612] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.113 [2024-08-14 06:50:51.277399] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:24.113 [2024-08-14 06:50:51.277533] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:24.113 [2024-08-14 06:50:51.277598] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:24.113 [2024-08-14 06:50:51.277734] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:19:24.113 [2024-08-14 06:50:51.277749] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:24.113 [2024-08-14 06:50:51.278359] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:19:24.113 [2024-08-14 06:50:51.278615] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:19:24.113 [2024-08-14 06:50:51.278674] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:19:24.113 [2024-08-14 06:50:51.279025] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.113 06:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:24.113 06:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:24.113 06:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:24.113 06:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:24.113 06:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:24.113 06:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:24.113 06:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:24.113 06:50:51 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:24.113 06:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:24.113 06:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:24.113 06:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.113 06:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.373 06:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:24.373 "name": "raid_bdev1", 00:19:24.373 "uuid": "53e2794d-5e9d-4b6c-8756-64294a237650", 00:19:24.373 "strip_size_kb": 0, 00:19:24.373 "state": "online", 00:19:24.373 "raid_level": "raid1", 00:19:24.373 "superblock": false, 00:19:24.373 "num_base_bdevs": 4, 00:19:24.373 "num_base_bdevs_discovered": 4, 00:19:24.373 "num_base_bdevs_operational": 4, 00:19:24.373 "base_bdevs_list": [ 00:19:24.373 { 00:19:24.373 "name": "BaseBdev1", 00:19:24.373 "uuid": "1d7ea360-cab9-5ac2-9f8b-c3f713c244d8", 00:19:24.373 "is_configured": true, 00:19:24.373 "data_offset": 0, 00:19:24.373 "data_size": 65536 00:19:24.373 }, 00:19:24.373 { 00:19:24.373 "name": "BaseBdev2", 00:19:24.373 "uuid": "71e4fc19-27c5-562e-ba41-1bd437b3b3d2", 00:19:24.373 "is_configured": true, 00:19:24.373 "data_offset": 0, 00:19:24.373 "data_size": 65536 00:19:24.373 }, 00:19:24.373 { 00:19:24.373 "name": "BaseBdev3", 00:19:24.373 "uuid": "73ce2b3b-e6d2-53e0-8e35-ea5bfa890b40", 00:19:24.373 "is_configured": true, 00:19:24.373 "data_offset": 0, 00:19:24.373 "data_size": 65536 00:19:24.373 }, 00:19:24.373 { 00:19:24.373 "name": "BaseBdev4", 00:19:24.373 "uuid": "5837c267-be31-5d0a-a604-0c1cf70b4e14", 00:19:24.373 "is_configured": true, 00:19:24.373 "data_offset": 0, 00:19:24.373 "data_size": 65536 00:19:24.373 } 00:19:24.373 ] 00:19:24.373 }' 00:19:24.373 06:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:24.373 06:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.942 06:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:19:24.942 06:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:25.201 [2024-08-14 06:50:52.350485] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:25.201 06:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:19:25.201 06:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.201 06:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:25.460 06:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:19:25.460 06:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:19:25.460 06:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:19:25.460 06:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:19:25.460 06:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:25.460 06:50:52 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:25.460 06:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:25.460 06:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:25.460 06:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:25.460 06:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:25.460 06:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:25.460 06:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:25.460 06:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:25.460 06:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:25.720 [2024-08-14 06:50:52.910128] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:19:25.720 /dev/nbd0 00:19:25.720 06:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:25.720 06:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:25.720 06:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:19:25.720 06:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:19:25.720 06:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:19:25.720 06:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:19:25.720 06:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:19:25.720 06:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:19:25.720 06:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:19:25.720 06:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:19:25.720 06:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:25.720 1+0 records in 00:19:25.720 1+0 records out 00:19:25.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499683 s, 8.2 MB/s 00:19:25.720 06:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.979 06:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:19:25.979 06:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.979 06:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:19:25.979 06:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:19:25.979 06:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:25.979 06:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:25.979 06:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:19:25.979 06:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:19:25.979 06:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:19:31.252 65536+0 records in 00:19:31.252 65536+0 records out 00:19:31.252 33554432 bytes (34 MB, 32 MiB) copied, 5.46706 s, 6.1 MB/s 00:19:31.252 06:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:31.252 06:50:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:31.252 06:50:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:31.252 06:50:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:31.252 06:50:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:31.252 06:50:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.252 06:50:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:31.512 06:50:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:31.512 [2024-08-14 06:50:58.672313] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.512 06:50:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:31.512 06:50:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:31.512 06:50:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.512 06:50:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.512 06:50:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:31.512 06:50:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:31.512 06:50:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.512 06:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:31.771 [2024-08-14 06:50:58.855703] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:31.771 06:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:31.771 06:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:31.771 06:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:31.771 06:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:31.771 06:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:31.771 06:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:31.771 06:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:31.771 06:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:31.771 06:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:31.771 06:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:31.771 06:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.771 06:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.030 06:50:59 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:32.030 "name": "raid_bdev1", 00:19:32.030 "uuid": "53e2794d-5e9d-4b6c-8756-64294a237650", 00:19:32.030 "strip_size_kb": 0, 00:19:32.030 "state": "online", 00:19:32.030 "raid_level": "raid1", 00:19:32.030 "superblock": false, 00:19:32.030 "num_base_bdevs": 4, 00:19:32.030 "num_base_bdevs_discovered": 3, 00:19:32.031 "num_base_bdevs_operational": 3, 00:19:32.031 "base_bdevs_list": [ 00:19:32.031 { 00:19:32.031 "name": null, 00:19:32.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.031 "is_configured": false, 00:19:32.031 "data_offset": 0, 00:19:32.031 "data_size": 65536 00:19:32.031 }, 00:19:32.031 { 00:19:32.031 "name": "BaseBdev2", 00:19:32.031 "uuid": "71e4fc19-27c5-562e-ba41-1bd437b3b3d2", 00:19:32.031 "is_configured": true, 00:19:32.031 "data_offset": 0, 00:19:32.031 "data_size": 65536 00:19:32.031 }, 00:19:32.031 { 00:19:32.031 "name": "BaseBdev3", 00:19:32.031 "uuid": "73ce2b3b-e6d2-53e0-8e35-ea5bfa890b40", 00:19:32.031 "is_configured": true, 00:19:32.031 "data_offset": 0, 00:19:32.031 "data_size": 65536 00:19:32.031 }, 00:19:32.031 { 00:19:32.031 "name": "BaseBdev4", 00:19:32.031 "uuid": "5837c267-be31-5d0a-a604-0c1cf70b4e14", 00:19:32.031 "is_configured": true, 00:19:32.031 "data_offset": 0, 00:19:32.031 "data_size": 65536 00:19:32.031 } 00:19:32.031 ] 00:19:32.031 }' 00:19:32.031 06:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:32.031 06:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.599 06:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:32.599 [2024-08-14 06:50:59.806135] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:32.599 [2024-08-14 06:50:59.809576] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d063c0 00:19:32.599 [2024-08-14 06:50:59.811588] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:32.599 06:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:33.973 06:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.973 06:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:33.973 06:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:33.973 06:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:33.973 06:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:33.973 06:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.973 06:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.973 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:33.973 "name": "raid_bdev1", 00:19:33.973 "uuid": "53e2794d-5e9d-4b6c-8756-64294a237650", 00:19:33.973 "strip_size_kb": 0, 00:19:33.973 "state": "online", 00:19:33.973 "raid_level": "raid1", 00:19:33.973 "superblock": false, 00:19:33.973 "num_base_bdevs": 4, 00:19:33.973 "num_base_bdevs_discovered": 4, 00:19:33.973 "num_base_bdevs_operational": 4, 00:19:33.973 "process": { 
00:19:33.973 "type": "rebuild", 00:19:33.973 "target": "spare", 00:19:33.973 "progress": { 00:19:33.973 "blocks": 24576, 00:19:33.973 "percent": 37 00:19:33.973 } 00:19:33.973 }, 00:19:33.973 "base_bdevs_list": [ 00:19:33.973 { 00:19:33.973 "name": "spare", 00:19:33.973 "uuid": "797282e4-4dbf-5d9a-9d70-b01bb6b118ad", 00:19:33.973 "is_configured": true, 00:19:33.973 "data_offset": 0, 00:19:33.973 "data_size": 65536 00:19:33.973 }, 00:19:33.973 { 00:19:33.973 "name": "BaseBdev2", 00:19:33.973 "uuid": "71e4fc19-27c5-562e-ba41-1bd437b3b3d2", 00:19:33.973 "is_configured": true, 00:19:33.973 "data_offset": 0, 00:19:33.973 "data_size": 65536 00:19:33.973 }, 00:19:33.973 { 00:19:33.973 "name": "BaseBdev3", 00:19:33.973 "uuid": "73ce2b3b-e6d2-53e0-8e35-ea5bfa890b40", 00:19:33.973 "is_configured": true, 00:19:33.973 "data_offset": 0, 00:19:33.973 "data_size": 65536 00:19:33.973 }, 00:19:33.973 { 00:19:33.973 "name": "BaseBdev4", 00:19:33.973 "uuid": "5837c267-be31-5d0a-a604-0c1cf70b4e14", 00:19:33.973 "is_configured": true, 00:19:33.973 "data_offset": 0, 00:19:33.973 "data_size": 65536 00:19:33.973 } 00:19:33.973 ] 00:19:33.973 }' 00:19:33.973 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:33.973 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:33.973 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:33.973 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:33.973 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:34.233 [2024-08-14 06:51:01.338316] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:34.233 [2024-08-14 06:51:01.419348] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:34.233 [2024-08-14 06:51:01.419447] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.233 [2024-08-14 06:51:01.419464] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:34.233 [2024-08-14 06:51:01.419475] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:34.233 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:34.233 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:34.233 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:34.233 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:34.233 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:34.233 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:34.233 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:34.233 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:34.233 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:34.233 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:34.233 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.233 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.492 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:34.492 "name": "raid_bdev1", 00:19:34.492 "uuid": "53e2794d-5e9d-4b6c-8756-64294a237650", 00:19:34.492 "strip_size_kb": 0, 00:19:34.492 "state": "online", 00:19:34.492 "raid_level": "raid1", 00:19:34.492 "superblock": false, 00:19:34.492 "num_base_bdevs": 4, 00:19:34.492 "num_base_bdevs_discovered": 3, 00:19:34.492 "num_base_bdevs_operational": 3, 00:19:34.492 "base_bdevs_list": [ 00:19:34.492 { 00:19:34.492 "name": null, 00:19:34.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.493 "is_configured": false, 00:19:34.493 "data_offset": 0, 00:19:34.493 "data_size": 65536 00:19:34.493 }, 00:19:34.493 { 00:19:34.493 "name": "BaseBdev2", 00:19:34.493 "uuid": "71e4fc19-27c5-562e-ba41-1bd437b3b3d2", 00:19:34.493 "is_configured": true, 00:19:34.493 "data_offset": 0, 00:19:34.493 "data_size": 65536 00:19:34.493 }, 00:19:34.493 { 00:19:34.493 "name": "BaseBdev3", 00:19:34.493 "uuid": "73ce2b3b-e6d2-53e0-8e35-ea5bfa890b40", 00:19:34.493 "is_configured": true, 00:19:34.493 "data_offset": 0, 00:19:34.493 "data_size": 65536 00:19:34.493 }, 00:19:34.493 { 00:19:34.493 "name": "BaseBdev4", 00:19:34.493 "uuid": "5837c267-be31-5d0a-a604-0c1cf70b4e14", 00:19:34.493 "is_configured": true, 00:19:34.493 "data_offset": 0, 00:19:34.493 "data_size": 65536 00:19:34.493 } 00:19:34.493 ] 00:19:34.493 }' 00:19:34.493 06:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:34.493 06:51:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.061 06:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:35.061 06:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:35.061 06:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:35.061 06:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:35.061 06:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:35.061 06:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.061 06:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.320 06:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:35.320 "name": "raid_bdev1", 00:19:35.320 "uuid": "53e2794d-5e9d-4b6c-8756-64294a237650", 00:19:35.320 "strip_size_kb": 0, 00:19:35.320 "state": "online", 00:19:35.320 "raid_level": "raid1", 00:19:35.320 "superblock": false, 00:19:35.320 "num_base_bdevs": 4, 00:19:35.320 "num_base_bdevs_discovered": 3, 00:19:35.320 "num_base_bdevs_operational": 3, 00:19:35.320 "base_bdevs_list": [ 00:19:35.320 { 00:19:35.320 "name": null, 00:19:35.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.320 "is_configured": false, 00:19:35.320 "data_offset": 0, 00:19:35.320 "data_size": 65536 00:19:35.320 }, 00:19:35.320 { 00:19:35.320 "name": "BaseBdev2", 00:19:35.320 "uuid": "71e4fc19-27c5-562e-ba41-1bd437b3b3d2", 00:19:35.320 "is_configured": true, 00:19:35.320 "data_offset": 0, 
00:19:35.320 "data_size": 65536 00:19:35.320 }, 00:19:35.320 { 00:19:35.320 "name": "BaseBdev3", 00:19:35.320 "uuid": "73ce2b3b-e6d2-53e0-8e35-ea5bfa890b40", 00:19:35.320 "is_configured": true, 00:19:35.320 "data_offset": 0, 00:19:35.320 "data_size": 65536 00:19:35.320 }, 00:19:35.320 { 00:19:35.320 "name": "BaseBdev4", 00:19:35.320 "uuid": "5837c267-be31-5d0a-a604-0c1cf70b4e14", 00:19:35.320 "is_configured": true, 00:19:35.320 "data_offset": 0, 00:19:35.320 "data_size": 65536 00:19:35.320 } 00:19:35.320 ] 00:19:35.320 }' 00:19:35.320 06:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:35.320 06:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:35.320 06:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:35.320 06:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:35.320 06:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:35.588 [2024-08-14 06:51:02.721676] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:35.588 [2024-08-14 06:51:02.725026] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06490 00:19:35.588 [2024-08-14 06:51:02.726882] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:35.588 06:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:19:36.561 06:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.561 06:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:36.561 06:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:36.561 06:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:36.561 06:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:36.561 06:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.561 06:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.822 06:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:36.822 "name": "raid_bdev1", 00:19:36.822 "uuid": "53e2794d-5e9d-4b6c-8756-64294a237650", 00:19:36.822 "strip_size_kb": 0, 00:19:36.823 "state": "online", 00:19:36.823 "raid_level": "raid1", 00:19:36.823 "superblock": false, 00:19:36.823 "num_base_bdevs": 4, 00:19:36.823 "num_base_bdevs_discovered": 4, 00:19:36.823 "num_base_bdevs_operational": 4, 00:19:36.823 "process": { 00:19:36.823 "type": "rebuild", 00:19:36.823 "target": "spare", 00:19:36.823 "progress": { 00:19:36.823 "blocks": 24576, 00:19:36.823 "percent": 37 00:19:36.823 } 00:19:36.823 }, 00:19:36.823 "base_bdevs_list": [ 00:19:36.823 { 00:19:36.823 "name": "spare", 00:19:36.823 "uuid": "797282e4-4dbf-5d9a-9d70-b01bb6b118ad", 00:19:36.823 "is_configured": true, 00:19:36.823 "data_offset": 0, 00:19:36.823 "data_size": 65536 00:19:36.823 }, 00:19:36.823 { 00:19:36.823 "name": "BaseBdev2", 00:19:36.823 "uuid": "71e4fc19-27c5-562e-ba41-1bd437b3b3d2", 00:19:36.823 "is_configured": true, 00:19:36.823 
"data_offset": 0, 00:19:36.823 "data_size": 65536 00:19:36.823 }, 00:19:36.823 { 00:19:36.823 "name": "BaseBdev3", 00:19:36.823 "uuid": "73ce2b3b-e6d2-53e0-8e35-ea5bfa890b40", 00:19:36.823 "is_configured": true, 00:19:36.823 "data_offset": 0, 00:19:36.824 "data_size": 65536 00:19:36.824 }, 00:19:36.824 { 00:19:36.824 "name": "BaseBdev4", 00:19:36.824 "uuid": "5837c267-be31-5d0a-a604-0c1cf70b4e14", 00:19:36.824 "is_configured": true, 00:19:36.824 "data_offset": 0, 00:19:36.824 "data_size": 65536 00:19:36.824 } 00:19:36.824 ] 00:19:36.824 }' 00:19:36.824 06:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:36.824 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:36.824 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:36.824 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.824 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:19:36.824 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:19:36.824 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:19:36.824 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:19:36.825 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:37.086 [2024-08-14 06:51:04.261734] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:37.086 [2024-08-14 06:51:04.333577] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06490 00:19:37.346 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:19:37.346 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:19:37.346 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.346 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:37.346 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:37.346 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:37.346 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:37.346 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.346 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.346 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:37.346 "name": "raid_bdev1", 00:19:37.346 "uuid": "53e2794d-5e9d-4b6c-8756-64294a237650", 00:19:37.346 "strip_size_kb": 0, 00:19:37.346 "state": "online", 00:19:37.346 "raid_level": "raid1", 00:19:37.346 "superblock": false, 00:19:37.346 "num_base_bdevs": 4, 00:19:37.346 "num_base_bdevs_discovered": 3, 00:19:37.346 "num_base_bdevs_operational": 3, 00:19:37.346 "process": { 00:19:37.346 "type": "rebuild", 00:19:37.346 "target": "spare", 00:19:37.346 "progress": { 00:19:37.346 "blocks": 36864, 00:19:37.346 "percent": 56 00:19:37.346 } 00:19:37.346 }, 
00:19:37.346 "base_bdevs_list": [ 00:19:37.346 { 00:19:37.346 "name": "spare", 00:19:37.346 "uuid": "797282e4-4dbf-5d9a-9d70-b01bb6b118ad", 00:19:37.346 "is_configured": true, 00:19:37.346 "data_offset": 0, 00:19:37.346 "data_size": 65536 00:19:37.346 }, 00:19:37.346 { 00:19:37.346 "name": null, 00:19:37.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.346 "is_configured": false, 00:19:37.346 "data_offset": 0, 00:19:37.346 "data_size": 65536 00:19:37.346 }, 00:19:37.346 { 00:19:37.346 "name": "BaseBdev3", 00:19:37.346 "uuid": "73ce2b3b-e6d2-53e0-8e35-ea5bfa890b40", 00:19:37.346 "is_configured": true, 00:19:37.346 "data_offset": 0, 00:19:37.346 "data_size": 65536 00:19:37.346 }, 00:19:37.346 { 00:19:37.346 "name": "BaseBdev4", 00:19:37.346 "uuid": "5837c267-be31-5d0a-a604-0c1cf70b4e14", 00:19:37.346 "is_configured": true, 00:19:37.346 "data_offset": 0, 00:19:37.346 "data_size": 65536 00:19:37.346 } 00:19:37.346 ] 00:19:37.346 }' 00:19:37.346 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:37.605 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.605 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:37.605 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.605 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=823 00:19:37.605 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:19:37.605 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.605 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:37.605 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:37.605 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:37.605 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:37.605 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.605 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.864 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:37.864 "name": "raid_bdev1", 00:19:37.864 "uuid": "53e2794d-5e9d-4b6c-8756-64294a237650", 00:19:37.864 "strip_size_kb": 0, 00:19:37.864 "state": "online", 00:19:37.864 "raid_level": "raid1", 00:19:37.864 "superblock": false, 00:19:37.864 "num_base_bdevs": 4, 00:19:37.864 "num_base_bdevs_discovered": 3, 00:19:37.864 "num_base_bdevs_operational": 3, 00:19:37.864 "process": { 00:19:37.864 "type": "rebuild", 00:19:37.864 "target": "spare", 00:19:37.864 "progress": { 00:19:37.864 "blocks": 43008, 00:19:37.864 "percent": 65 00:19:37.864 } 00:19:37.864 }, 00:19:37.864 "base_bdevs_list": [ 00:19:37.864 { 00:19:37.864 "name": "spare", 00:19:37.864 "uuid": "797282e4-4dbf-5d9a-9d70-b01bb6b118ad", 00:19:37.864 "is_configured": true, 00:19:37.864 "data_offset": 0, 00:19:37.864 "data_size": 65536 00:19:37.864 }, 00:19:37.864 { 00:19:37.864 "name": null, 00:19:37.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.864 "is_configured": false, 00:19:37.864 "data_offset": 0, 00:19:37.864 
"data_size": 65536 00:19:37.864 }, 00:19:37.864 { 00:19:37.864 "name": "BaseBdev3", 00:19:37.864 "uuid": "73ce2b3b-e6d2-53e0-8e35-ea5bfa890b40", 00:19:37.864 "is_configured": true, 00:19:37.864 "data_offset": 0, 00:19:37.864 "data_size": 65536 00:19:37.864 }, 00:19:37.864 { 00:19:37.864 "name": "BaseBdev4", 00:19:37.864 "uuid": "5837c267-be31-5d0a-a604-0c1cf70b4e14", 00:19:37.864 "is_configured": true, 00:19:37.864 "data_offset": 0, 00:19:37.864 "data_size": 65536 00:19:37.864 } 00:19:37.864 ] 00:19:37.864 }' 00:19:37.864 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:37.864 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.864 06:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:37.864 06:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.864 06:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:19:38.803 [2024-08-14 06:51:05.940941] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:38.803 [2024-08-14 06:51:05.941027] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:38.803 [2024-08-14 06:51:05.941067] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.803 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:19:38.803 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.803 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:38.803 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:38.803 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:38.803 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:38.803 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.803 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.062 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:39.062 "name": "raid_bdev1", 00:19:39.062 "uuid": "53e2794d-5e9d-4b6c-8756-64294a237650", 00:19:39.062 "strip_size_kb": 0, 00:19:39.062 "state": "online", 00:19:39.062 "raid_level": "raid1", 00:19:39.062 "superblock": false, 00:19:39.062 "num_base_bdevs": 4, 00:19:39.062 "num_base_bdevs_discovered": 3, 00:19:39.062 "num_base_bdevs_operational": 3, 00:19:39.062 "base_bdevs_list": [ 00:19:39.062 { 00:19:39.062 "name": "spare", 00:19:39.062 "uuid": "797282e4-4dbf-5d9a-9d70-b01bb6b118ad", 00:19:39.062 "is_configured": true, 00:19:39.062 "data_offset": 0, 00:19:39.062 "data_size": 65536 00:19:39.062 }, 00:19:39.062 { 00:19:39.062 "name": null, 00:19:39.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.062 "is_configured": false, 00:19:39.062 "data_offset": 0, 00:19:39.062 "data_size": 65536 00:19:39.062 }, 00:19:39.062 { 00:19:39.062 "name": "BaseBdev3", 00:19:39.062 "uuid": "73ce2b3b-e6d2-53e0-8e35-ea5bfa890b40", 00:19:39.062 "is_configured": true, 00:19:39.062 "data_offset": 0, 00:19:39.062 "data_size": 65536 00:19:39.062 }, 00:19:39.062 { 
00:19:39.062 "name": "BaseBdev4", 00:19:39.062 "uuid": "5837c267-be31-5d0a-a604-0c1cf70b4e14", 00:19:39.062 "is_configured": true, 00:19:39.062 "data_offset": 0, 00:19:39.062 "data_size": 65536 00:19:39.062 } 00:19:39.062 ] 00:19:39.062 }' 00:19:39.062 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:39.062 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:39.062 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:39.321 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:19:39.321 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:19:39.321 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:39.321 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:39.321 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:39.321 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:39.321 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:39.321 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.322 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:39.585 "name": "raid_bdev1", 00:19:39.585 "uuid": "53e2794d-5e9d-4b6c-8756-64294a237650", 00:19:39.585 "strip_size_kb": 0, 00:19:39.585 "state": "online", 00:19:39.585 "raid_level": "raid1", 00:19:39.585 "superblock": false, 00:19:39.585 "num_base_bdevs": 4, 00:19:39.585 "num_base_bdevs_discovered": 3, 00:19:39.585 "num_base_bdevs_operational": 3, 00:19:39.585 "base_bdevs_list": [ 00:19:39.585 { 00:19:39.585 "name": "spare", 00:19:39.585 "uuid": "797282e4-4dbf-5d9a-9d70-b01bb6b118ad", 00:19:39.585 "is_configured": true, 00:19:39.585 "data_offset": 0, 00:19:39.585 "data_size": 65536 00:19:39.585 }, 00:19:39.585 { 00:19:39.585 "name": null, 00:19:39.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.585 "is_configured": false, 00:19:39.585 "data_offset": 0, 00:19:39.585 "data_size": 65536 00:19:39.585 }, 00:19:39.585 { 00:19:39.585 "name": "BaseBdev3", 00:19:39.585 "uuid": "73ce2b3b-e6d2-53e0-8e35-ea5bfa890b40", 00:19:39.585 "is_configured": true, 00:19:39.585 "data_offset": 0, 00:19:39.585 "data_size": 65536 00:19:39.585 }, 00:19:39.585 { 00:19:39.585 "name": "BaseBdev4", 00:19:39.585 "uuid": "5837c267-be31-5d0a-a604-0c1cf70b4e14", 00:19:39.585 "is_configured": true, 00:19:39.585 "data_offset": 0, 00:19:39.585 "data_size": 65536 00:19:39.585 } 00:19:39.585 ] 00:19:39.585 }' 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.585 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.853 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:39.853 "name": "raid_bdev1", 00:19:39.853 "uuid": "53e2794d-5e9d-4b6c-8756-64294a237650", 00:19:39.853 "strip_size_kb": 0, 00:19:39.853 "state": "online", 00:19:39.853 "raid_level": "raid1", 00:19:39.853 "superblock": false, 00:19:39.853 "num_base_bdevs": 4, 00:19:39.853 "num_base_bdevs_discovered": 3, 00:19:39.853 "num_base_bdevs_operational": 3, 00:19:39.853 "base_bdevs_list": [ 00:19:39.853 { 00:19:39.853 "name": "spare", 00:19:39.853 "uuid": "797282e4-4dbf-5d9a-9d70-b01bb6b118ad", 00:19:39.853 "is_configured": true, 00:19:39.853 "data_offset": 0, 00:19:39.853 "data_size": 65536 00:19:39.853 }, 00:19:39.853 { 00:19:39.853 "name": null, 00:19:39.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.853 "is_configured": false, 00:19:39.853 "data_offset": 0, 00:19:39.853 "data_size": 65536 00:19:39.853 }, 00:19:39.853 { 00:19:39.853 "name": "BaseBdev3", 00:19:39.853 "uuid": "73ce2b3b-e6d2-53e0-8e35-ea5bfa890b40", 00:19:39.853 "is_configured": true, 00:19:39.853 "data_offset": 0, 00:19:39.853 "data_size": 65536 00:19:39.853 }, 00:19:39.853 { 00:19:39.853 "name": "BaseBdev4", 00:19:39.853 "uuid": "5837c267-be31-5d0a-a604-0c1cf70b4e14", 00:19:39.853 "is_configured": true, 00:19:39.853 "data_offset": 0, 00:19:39.853 "data_size": 65536 00:19:39.853 } 00:19:39.853 ] 00:19:39.853 }' 00:19:39.853 06:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:39.853 06:51:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.494 06:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:40.494 [2024-08-14 06:51:07.742258] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:40.494 [2024-08-14 06:51:07.742311] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:40.494 [2024-08-14 06:51:07.742427] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.494 [2024-08-14 06:51:07.742533] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:19:40.494 [2024-08-14 06:51:07.742548] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:19:40.753 06:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.753 06:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:19:41.012 06:51:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:19:41.012 06:51:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:19:41.012 06:51:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:19:41.012 06:51:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:41.012 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:41.012 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:41.013 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:41.013 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:41.013 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:41.013 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:41.013 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:41.013 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:41.013 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:41.013 /dev/nbd0 00:19:41.013 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:41.013 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:41.013 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:19:41.013 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:19:41.013 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:19:41.013 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:19:41.013 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:19:41.272 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:19:41.272 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:19:41.272 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:19:41.272 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:41.272 1+0 records in 00:19:41.272 1+0 records out 00:19:41.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279396 s, 14.7 MB/s 00:19:41.272 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.272 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:19:41.272 06:51:08 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.272 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:19:41.272 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:19:41.272 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:41.272 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:41.272 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:41.272 /dev/nbd1 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:41.531 1+0 records in 00:19:41.531 1+0 records out 00:19:41.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430726 s, 9.5 MB/s 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:19:41.531 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:41.790 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:41.790 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:41.790 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:41.790 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:41.790 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:41.790 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:41.790 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:41.790 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:41.790 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:41.790 06:51:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 95110 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 95110 ']' 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 95110 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95110 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:42.048 killing process with pid 95110 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95110' 00:19:42.048 06:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@965 -- # kill 95110 00:19:42.048 Received shutdown signal, test time was about 60.000000 seconds 00:19:42.048 00:19:42.048 Latency(us) 00:19:42.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.049 
=================================================================================================================== 00:19:42.049 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:42.049 [2024-08-14 06:51:09.152983] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:42.049 06:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # wait 95110 00:19:42.049 [2024-08-14 06:51:09.206504] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:19:42.307 00:19:42.307 real 0m22.246s 00:19:42.307 user 0m31.640s 00:19:42.307 sys 0m3.842s 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.307 ************************************ 00:19:42.307 END TEST raid_rebuild_test 00:19:42.307 ************************************ 00:19:42.307 06:51:09 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:19:42.307 06:51:09 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:19:42.307 06:51:09 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:42.307 06:51:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.307 ************************************ 00:19:42.307 START TEST raid_rebuild_test_sb 00:19:42.307 ************************************ 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 true false true 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:19:42.307 06:51:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:19:42.307 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:19:42.308 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:19:42.308 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:19:42.308 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:19:42.308 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:19:42.308 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:19:42.308 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:19:42.308 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:42.308 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=95616 00:19:42.308 06:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 95616 /var/tmp/spdk-raid.sock 00:19:42.308 06:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 95616 ']' 00:19:42.308 06:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:42.308 06:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:42.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:42.308 06:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:42.308 06:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:42.308 06:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.567 [2024-08-14 06:51:09.626601] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:19:42.567 [2024-08-14 06:51:09.626794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95616 ] 00:19:42.567 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:42.567 Zero copy mechanism will not be used. 
00:19:42.567 [2024-08-14 06:51:09.783551] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.826 [2024-08-14 06:51:09.835684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.826 [2024-08-14 06:51:09.879016] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:42.826 [2024-08-14 06:51:09.879065] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:43.395 06:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:43.395 06:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:19:43.395 06:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:19:43.395 06:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:43.655 BaseBdev1_malloc 00:19:43.655 06:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:43.914 [2024-08-14 06:51:10.965040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:43.914 [2024-08-14 06:51:10.965155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.914 [2024-08-14 06:51:10.965214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:19:43.914 [2024-08-14 06:51:10.965235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.914 [2024-08-14 06:51:10.967926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.914 [2024-08-14 06:51:10.967979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:43.914 BaseBdev1 00:19:43.914 06:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:19:43.914 06:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:44.174 BaseBdev2_malloc 00:19:44.174 06:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:44.433 [2024-08-14 06:51:11.505721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:44.433 [2024-08-14 06:51:11.505844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.433 [2024-08-14 06:51:11.505872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:44.433 [2024-08-14 06:51:11.505885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.433 [2024-08-14 06:51:11.508261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.433 [2024-08-14 06:51:11.508309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:44.433 BaseBdev2 00:19:44.433 06:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:19:44.433 06:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:44.693 BaseBdev3_malloc 00:19:44.693 06:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:44.952 [2024-08-14 06:51:12.003986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:44.952 [2024-08-14 06:51:12.004081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.952 [2024-08-14 06:51:12.004126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:44.952 [2024-08-14 06:51:12.004139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.952 [2024-08-14 06:51:12.006642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.952 [2024-08-14 06:51:12.006694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:44.952 BaseBdev3 00:19:44.952 06:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:19:44.952 06:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:45.212 BaseBdev4_malloc 00:19:45.212 06:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:45.472 [2024-08-14 06:51:12.476257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:45.472 [2024-08-14 06:51:12.476338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.472 [2024-08-14 06:51:12.476361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:45.472 [2024-08-14 06:51:12.476374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.472 [2024-08-14 06:51:12.478722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.472 [2024-08-14 06:51:12.478769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:45.472 BaseBdev4 00:19:45.472 06:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:45.732 spare_malloc 00:19:45.732 06:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:45.732 spare_delay 00:19:45.732 06:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:45.992 [2024-08-14 06:51:13.212001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:45.993 [2024-08-14 06:51:13.212093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.993 [2024-08-14 06:51:13.212117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:45.993 [2024-08-14 06:51:13.212128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.993 [2024-08-14 
06:51:13.214603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.993 [2024-08-14 06:51:13.214657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:45.993 spare 00:19:45.993 06:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:19:46.252 [2024-08-14 06:51:13.483676] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:46.252 [2024-08-14 06:51:13.486007] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.252 [2024-08-14 06:51:13.486097] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:46.252 [2024-08-14 06:51:13.486154] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:46.252 [2024-08-14 06:51:13.486382] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:19:46.252 [2024-08-14 06:51:13.486413] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:46.252 [2024-08-14 06:51:13.486793] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:19:46.252 [2024-08-14 06:51:13.487004] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:19:46.252 [2024-08-14 06:51:13.487025] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:19:46.252 [2024-08-14 06:51:13.487228] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.511 06:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:46.511 06:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:46.511 06:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:46.511 06:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:46.511 06:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:46.511 06:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:46.511 06:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.511 06:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.511 06:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.511 06:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.511 06:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.511 06:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.511 06:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.511 "name": "raid_bdev1", 00:19:46.511 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:19:46.511 "strip_size_kb": 0, 00:19:46.511 "state": "online", 00:19:46.511 "raid_level": "raid1", 00:19:46.511 "superblock": true, 00:19:46.511 "num_base_bdevs": 4, 00:19:46.511 
"num_base_bdevs_discovered": 4, 00:19:46.511 "num_base_bdevs_operational": 4, 00:19:46.511 "base_bdevs_list": [ 00:19:46.511 { 00:19:46.511 "name": "BaseBdev1", 00:19:46.511 "uuid": "a67fc239-f58a-5d14-8ff0-0b49e1e5ccf9", 00:19:46.511 "is_configured": true, 00:19:46.511 "data_offset": 2048, 00:19:46.511 "data_size": 63488 00:19:46.511 }, 00:19:46.511 { 00:19:46.511 "name": "BaseBdev2", 00:19:46.511 "uuid": "4e4b9a60-8c60-5aa9-a4dd-b4987fc6e7df", 00:19:46.511 "is_configured": true, 00:19:46.511 "data_offset": 2048, 00:19:46.511 "data_size": 63488 00:19:46.511 }, 00:19:46.511 { 00:19:46.511 "name": "BaseBdev3", 00:19:46.511 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:19:46.511 "is_configured": true, 00:19:46.511 "data_offset": 2048, 00:19:46.511 "data_size": 63488 00:19:46.511 }, 00:19:46.511 { 00:19:46.511 "name": "BaseBdev4", 00:19:46.511 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:19:46.511 "is_configured": true, 00:19:46.511 "data_offset": 2048, 00:19:46.511 "data_size": 63488 00:19:46.511 } 00:19:46.511 ] 00:19:46.511 }' 00:19:46.511 06:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.511 06:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.447 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:47.447 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:19:47.447 [2024-08-14 06:51:14.654487] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:47.447 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:19:47.447 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:47.447 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.705 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:19:47.705 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:19:47.705 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:19:47.705 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:19:47.705 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:47.705 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:47.705 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:47.705 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:47.705 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:47.705 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:47.705 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:47.705 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:47.705 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:47.705 06:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:47.964 [2024-08-14 06:51:15.166040] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:19:47.964 /dev/nbd0 00:19:47.964 06:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:47.964 06:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:47.964 06:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:19:47.964 06:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:19:47.964 06:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:19:47.964 06:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:19:47.964 06:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:19:48.224 06:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:19:48.224 06:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:19:48.224 06:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:19:48.224 06:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:48.224 1+0 records in 00:19:48.224 1+0 records out 00:19:48.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517986 s, 7.9 MB/s 00:19:48.224 06:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.224 06:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:19:48.224 06:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.224 06:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:19:48.224 06:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:19:48.224 06:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:48.224 06:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:48.224 06:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:19:48.224 06:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:19:48.224 06:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:54.800 63488+0 records in 00:19:54.800 63488+0 records out 00:19:54.800 32505856 bytes (33 MB, 31 MiB) copied, 6.73851 s, 4.8 MB/s 00:19:54.800 06:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:54.800 06:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:54.800 06:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:54.800 06:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:54.800 06:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:54.800 06:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:19:54.800 06:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:55.059 [2024-08-14 06:51:22.227011] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.059 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:55.059 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:55.059 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:55.059 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:55.060 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:55.060 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:55.060 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:55.060 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:55.060 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:55.318 [2024-08-14 06:51:22.527716] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:55.318 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:55.318 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:55.318 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:55.318 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:55.318 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:55.318 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:55.318 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:55.318 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:55.318 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:55.318 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:55.318 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.318 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.577 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.577 "name": "raid_bdev1", 00:19:55.577 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:19:55.577 "strip_size_kb": 0, 00:19:55.577 "state": "online", 00:19:55.577 "raid_level": "raid1", 00:19:55.577 "superblock": true, 00:19:55.577 "num_base_bdevs": 4, 00:19:55.577 "num_base_bdevs_discovered": 3, 00:19:55.577 "num_base_bdevs_operational": 3, 00:19:55.577 "base_bdevs_list": [ 00:19:55.577 { 00:19:55.577 "name": null, 00:19:55.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.577 "is_configured": false, 00:19:55.577 "data_offset": 2048, 00:19:55.577 "data_size": 63488 00:19:55.577 }, 00:19:55.577 { 00:19:55.577 
"name": "BaseBdev2", 00:19:55.577 "uuid": "4e4b9a60-8c60-5aa9-a4dd-b4987fc6e7df", 00:19:55.577 "is_configured": true, 00:19:55.577 "data_offset": 2048, 00:19:55.577 "data_size": 63488 00:19:55.577 }, 00:19:55.577 { 00:19:55.577 "name": "BaseBdev3", 00:19:55.577 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:19:55.577 "is_configured": true, 00:19:55.577 "data_offset": 2048, 00:19:55.577 "data_size": 63488 00:19:55.577 }, 00:19:55.577 { 00:19:55.577 "name": "BaseBdev4", 00:19:55.577 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:19:55.577 "is_configured": true, 00:19:55.577 "data_offset": 2048, 00:19:55.577 "data_size": 63488 00:19:55.577 } 00:19:55.577 ] 00:19:55.577 }' 00:19:55.577 06:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.577 06:51:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.515 06:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:56.515 [2024-08-14 06:51:23.733972] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:56.515 [2024-08-14 06:51:23.737633] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:19:56.515 [2024-08-14 06:51:23.739838] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:56.515 06:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:57.892 06:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:57.892 06:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:57.892 06:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:57.892 06:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:57.892 06:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:57.892 06:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.892 06:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.892 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:57.892 "name": "raid_bdev1", 00:19:57.892 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:19:57.892 "strip_size_kb": 0, 00:19:57.892 "state": "online", 00:19:57.892 "raid_level": "raid1", 00:19:57.892 "superblock": true, 00:19:57.892 "num_base_bdevs": 4, 00:19:57.892 "num_base_bdevs_discovered": 4, 00:19:57.892 "num_base_bdevs_operational": 4, 00:19:57.892 "process": { 00:19:57.892 "type": "rebuild", 00:19:57.892 "target": "spare", 00:19:57.892 "progress": { 00:19:57.892 "blocks": 24576, 00:19:57.892 "percent": 38 00:19:57.892 } 00:19:57.892 }, 00:19:57.892 "base_bdevs_list": [ 00:19:57.892 { 00:19:57.892 "name": "spare", 00:19:57.892 "uuid": "78d27f1a-edaf-5418-bf52-ef04950d3ae6", 00:19:57.892 "is_configured": true, 00:19:57.892 "data_offset": 2048, 00:19:57.892 "data_size": 63488 00:19:57.892 }, 00:19:57.892 { 00:19:57.892 "name": "BaseBdev2", 00:19:57.892 "uuid": "4e4b9a60-8c60-5aa9-a4dd-b4987fc6e7df", 00:19:57.892 "is_configured": true, 00:19:57.892 "data_offset": 2048, 00:19:57.892 "data_size": 63488 
00:19:57.892 }, 00:19:57.892 { 00:19:57.892 "name": "BaseBdev3", 00:19:57.892 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:19:57.892 "is_configured": true, 00:19:57.892 "data_offset": 2048, 00:19:57.892 "data_size": 63488 00:19:57.892 }, 00:19:57.892 { 00:19:57.892 "name": "BaseBdev4", 00:19:57.892 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:19:57.892 "is_configured": true, 00:19:57.892 "data_offset": 2048, 00:19:57.892 "data_size": 63488 00:19:57.892 } 00:19:57.892 ] 00:19:57.892 }' 00:19:57.892 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:57.892 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:57.892 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:57.892 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:57.892 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:58.151 [2024-08-14 06:51:25.371126] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:58.409 [2024-08-14 06:51:25.448808] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:58.409 [2024-08-14 06:51:25.448911] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.409 [2024-08-14 06:51:25.448933] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:58.410 [2024-08-14 06:51:25.448945] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:58.410 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:58.410 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:58.410 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:58.410 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:58.410 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:58.410 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:58.410 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:58.410 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:58.410 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:58.410 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:58.410 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.410 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.668 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:58.668 "name": "raid_bdev1", 00:19:58.668 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:19:58.668 "strip_size_kb": 0, 00:19:58.668 "state": "online", 00:19:58.668 "raid_level": "raid1", 00:19:58.668 "superblock": true, 00:19:58.668 "num_base_bdevs": 4, 
00:19:58.668 "num_base_bdevs_discovered": 3, 00:19:58.668 "num_base_bdevs_operational": 3, 00:19:58.668 "base_bdevs_list": [ 00:19:58.668 { 00:19:58.668 "name": null, 00:19:58.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.668 "is_configured": false, 00:19:58.668 "data_offset": 2048, 00:19:58.668 "data_size": 63488 00:19:58.668 }, 00:19:58.668 { 00:19:58.668 "name": "BaseBdev2", 00:19:58.668 "uuid": "4e4b9a60-8c60-5aa9-a4dd-b4987fc6e7df", 00:19:58.668 "is_configured": true, 00:19:58.668 "data_offset": 2048, 00:19:58.668 "data_size": 63488 00:19:58.668 }, 00:19:58.668 { 00:19:58.668 "name": "BaseBdev3", 00:19:58.668 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:19:58.668 "is_configured": true, 00:19:58.668 "data_offset": 2048, 00:19:58.668 "data_size": 63488 00:19:58.668 }, 00:19:58.668 { 00:19:58.668 "name": "BaseBdev4", 00:19:58.668 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:19:58.668 "is_configured": true, 00:19:58.668 "data_offset": 2048, 00:19:58.668 "data_size": 63488 00:19:58.668 } 00:19:58.668 ] 00:19:58.668 }' 00:19:58.668 06:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:58.668 06:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.234 06:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:59.234 06:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:59.234 06:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:59.234 06:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:59.234 06:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:59.234 06:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.234 06:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.493 06:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:59.493 "name": "raid_bdev1", 00:19:59.493 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:19:59.493 "strip_size_kb": 0, 00:19:59.493 "state": "online", 00:19:59.493 "raid_level": "raid1", 00:19:59.493 "superblock": true, 00:19:59.493 "num_base_bdevs": 4, 00:19:59.493 "num_base_bdevs_discovered": 3, 00:19:59.493 "num_base_bdevs_operational": 3, 00:19:59.493 "base_bdevs_list": [ 00:19:59.493 { 00:19:59.493 "name": null, 00:19:59.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.493 "is_configured": false, 00:19:59.493 "data_offset": 2048, 00:19:59.493 "data_size": 63488 00:19:59.493 }, 00:19:59.493 { 00:19:59.493 "name": "BaseBdev2", 00:19:59.493 "uuid": "4e4b9a60-8c60-5aa9-a4dd-b4987fc6e7df", 00:19:59.493 "is_configured": true, 00:19:59.493 "data_offset": 2048, 00:19:59.493 "data_size": 63488 00:19:59.493 }, 00:19:59.493 { 00:19:59.493 "name": "BaseBdev3", 00:19:59.493 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:19:59.493 "is_configured": true, 00:19:59.493 "data_offset": 2048, 00:19:59.493 "data_size": 63488 00:19:59.493 }, 00:19:59.493 { 00:19:59.493 "name": "BaseBdev4", 00:19:59.493 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:19:59.493 "is_configured": true, 00:19:59.493 "data_offset": 2048, 00:19:59.493 "data_size": 63488 00:19:59.493 } 00:19:59.493 ] 00:19:59.493 }' 
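Throughout this test the array is inspected by fetching all RAID bdevs over RPC and filtering for raid_bdev1 with jq, exactly as traced above: after BaseBdev1 is removed the state stays online with 3 of 4 base bdevs discovered, a spare is attached to start a rebuild, and once the rebuild target is removed the process fields fall back to "none". A small sketch of that status-polling pattern, assuming the same RPC socket and bdev name used in this run:

    # Fetch the RAID bdev description and extract the fields the test asserts on
    raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

    state=$(echo "$raid_bdev_info" | jq -r '.state')
    discovered=$(echo "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')
    process_type=$(echo "$raid_bdev_info" | jq -r '.process.type // "none"')
    process_target=$(echo "$raid_bdev_info" | jq -r '.process.target // "none"')

    # While a rebuild is running, process.type is "rebuild" and process.target names the
    # spare; when no process is active both expressions default to "none".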
00:19:59.493 06:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:59.493 06:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:59.493 06:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:59.751 06:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:59.751 06:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:00.010 [2024-08-14 06:51:27.066631] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:00.010 [2024-08-14 06:51:27.070306] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e4f0 00:20:00.010 [2024-08-14 06:51:27.072454] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:00.010 06:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:20:00.959 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.959 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:00.959 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:00.959 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:00.959 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:00.959 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.959 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.217 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:01.217 "name": "raid_bdev1", 00:20:01.217 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:01.217 "strip_size_kb": 0, 00:20:01.217 "state": "online", 00:20:01.217 "raid_level": "raid1", 00:20:01.217 "superblock": true, 00:20:01.217 "num_base_bdevs": 4, 00:20:01.217 "num_base_bdevs_discovered": 4, 00:20:01.217 "num_base_bdevs_operational": 4, 00:20:01.217 "process": { 00:20:01.217 "type": "rebuild", 00:20:01.217 "target": "spare", 00:20:01.217 "progress": { 00:20:01.217 "blocks": 26624, 00:20:01.217 "percent": 41 00:20:01.217 } 00:20:01.217 }, 00:20:01.217 "base_bdevs_list": [ 00:20:01.217 { 00:20:01.217 "name": "spare", 00:20:01.217 "uuid": "78d27f1a-edaf-5418-bf52-ef04950d3ae6", 00:20:01.217 "is_configured": true, 00:20:01.217 "data_offset": 2048, 00:20:01.217 "data_size": 63488 00:20:01.217 }, 00:20:01.217 { 00:20:01.217 "name": "BaseBdev2", 00:20:01.217 "uuid": "4e4b9a60-8c60-5aa9-a4dd-b4987fc6e7df", 00:20:01.217 "is_configured": true, 00:20:01.217 "data_offset": 2048, 00:20:01.217 "data_size": 63488 00:20:01.217 }, 00:20:01.217 { 00:20:01.217 "name": "BaseBdev3", 00:20:01.217 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:01.217 "is_configured": true, 00:20:01.217 "data_offset": 2048, 00:20:01.217 "data_size": 63488 00:20:01.217 }, 00:20:01.217 { 00:20:01.217 "name": "BaseBdev4", 00:20:01.217 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:01.217 "is_configured": true, 00:20:01.217 "data_offset": 2048, 00:20:01.217 
"data_size": 63488 00:20:01.217 } 00:20:01.217 ] 00:20:01.217 }' 00:20:01.217 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:01.217 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:01.217 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:01.476 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.476 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:20:01.476 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:20:01.476 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:20:01.476 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:20:01.476 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:20:01.476 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:20:01.476 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:01.734 [2024-08-14 06:51:28.755346] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:01.734 [2024-08-14 06:51:28.881156] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e4f0 00:20:01.734 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:20:01.734 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:20:01.734 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:01.734 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:01.734 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:01.734 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:01.734 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:01.734 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.734 06:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.992 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:01.992 "name": "raid_bdev1", 00:20:01.992 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:01.992 "strip_size_kb": 0, 00:20:01.992 "state": "online", 00:20:01.992 "raid_level": "raid1", 00:20:01.992 "superblock": true, 00:20:01.992 "num_base_bdevs": 4, 00:20:01.992 "num_base_bdevs_discovered": 3, 00:20:01.992 "num_base_bdevs_operational": 3, 00:20:01.992 "process": { 00:20:01.992 "type": "rebuild", 00:20:01.992 "target": "spare", 00:20:01.992 "progress": { 00:20:01.992 "blocks": 38912, 00:20:01.992 "percent": 61 00:20:01.992 } 00:20:01.992 }, 00:20:01.992 "base_bdevs_list": [ 00:20:01.992 { 00:20:01.992 "name": "spare", 00:20:01.992 "uuid": "78d27f1a-edaf-5418-bf52-ef04950d3ae6", 00:20:01.992 "is_configured": true, 00:20:01.992 "data_offset": 2048, 00:20:01.992 
"data_size": 63488 00:20:01.992 }, 00:20:01.992 { 00:20:01.992 "name": null, 00:20:01.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.993 "is_configured": false, 00:20:01.993 "data_offset": 2048, 00:20:01.993 "data_size": 63488 00:20:01.993 }, 00:20:01.993 { 00:20:01.993 "name": "BaseBdev3", 00:20:01.993 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:01.993 "is_configured": true, 00:20:01.993 "data_offset": 2048, 00:20:01.993 "data_size": 63488 00:20:01.993 }, 00:20:01.993 { 00:20:01.993 "name": "BaseBdev4", 00:20:01.993 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:01.993 "is_configured": true, 00:20:01.993 "data_offset": 2048, 00:20:01.993 "data_size": 63488 00:20:01.993 } 00:20:01.993 ] 00:20:01.993 }' 00:20:01.993 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:02.252 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:02.252 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:02.252 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:02.252 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=848 00:20:02.252 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:20:02.252 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:02.252 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:02.252 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:02.252 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:02.252 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:02.252 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.252 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.511 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:02.511 "name": "raid_bdev1", 00:20:02.511 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:02.511 "strip_size_kb": 0, 00:20:02.511 "state": "online", 00:20:02.511 "raid_level": "raid1", 00:20:02.511 "superblock": true, 00:20:02.511 "num_base_bdevs": 4, 00:20:02.511 "num_base_bdevs_discovered": 3, 00:20:02.511 "num_base_bdevs_operational": 3, 00:20:02.511 "process": { 00:20:02.511 "type": "rebuild", 00:20:02.511 "target": "spare", 00:20:02.511 "progress": { 00:20:02.511 "blocks": 47104, 00:20:02.511 "percent": 74 00:20:02.511 } 00:20:02.511 }, 00:20:02.511 "base_bdevs_list": [ 00:20:02.511 { 00:20:02.511 "name": "spare", 00:20:02.511 "uuid": "78d27f1a-edaf-5418-bf52-ef04950d3ae6", 00:20:02.511 "is_configured": true, 00:20:02.511 "data_offset": 2048, 00:20:02.511 "data_size": 63488 00:20:02.511 }, 00:20:02.511 { 00:20:02.511 "name": null, 00:20:02.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.511 "is_configured": false, 00:20:02.511 "data_offset": 2048, 00:20:02.511 "data_size": 63488 00:20:02.511 }, 00:20:02.511 { 00:20:02.511 "name": "BaseBdev3", 00:20:02.511 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:02.511 
"is_configured": true, 00:20:02.511 "data_offset": 2048, 00:20:02.511 "data_size": 63488 00:20:02.511 }, 00:20:02.511 { 00:20:02.511 "name": "BaseBdev4", 00:20:02.511 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:02.511 "is_configured": true, 00:20:02.511 "data_offset": 2048, 00:20:02.511 "data_size": 63488 00:20:02.511 } 00:20:02.511 ] 00:20:02.511 }' 00:20:02.511 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:02.511 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:02.511 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:02.511 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:02.511 06:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:20:03.079 [2024-08-14 06:51:30.288445] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:03.079 [2024-08-14 06:51:30.288537] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:03.079 [2024-08-14 06:51:30.288673] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.647 06:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:20:03.647 06:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:03.647 06:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:03.647 06:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:03.647 06:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:03.647 06:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:03.647 06:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.647 06:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.906 06:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:03.906 "name": "raid_bdev1", 00:20:03.906 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:03.906 "strip_size_kb": 0, 00:20:03.906 "state": "online", 00:20:03.906 "raid_level": "raid1", 00:20:03.906 "superblock": true, 00:20:03.906 "num_base_bdevs": 4, 00:20:03.906 "num_base_bdevs_discovered": 3, 00:20:03.906 "num_base_bdevs_operational": 3, 00:20:03.906 "base_bdevs_list": [ 00:20:03.906 { 00:20:03.906 "name": "spare", 00:20:03.906 "uuid": "78d27f1a-edaf-5418-bf52-ef04950d3ae6", 00:20:03.906 "is_configured": true, 00:20:03.906 "data_offset": 2048, 00:20:03.906 "data_size": 63488 00:20:03.906 }, 00:20:03.906 { 00:20:03.906 "name": null, 00:20:03.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.906 "is_configured": false, 00:20:03.906 "data_offset": 2048, 00:20:03.906 "data_size": 63488 00:20:03.906 }, 00:20:03.906 { 00:20:03.906 "name": "BaseBdev3", 00:20:03.906 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:03.906 "is_configured": true, 00:20:03.906 "data_offset": 2048, 00:20:03.906 "data_size": 63488 00:20:03.906 }, 00:20:03.906 { 00:20:03.906 "name": "BaseBdev4", 00:20:03.906 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:03.906 
"is_configured": true, 00:20:03.906 "data_offset": 2048, 00:20:03.906 "data_size": 63488 00:20:03.906 } 00:20:03.906 ] 00:20:03.906 }' 00:20:03.906 06:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:03.906 06:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:03.906 06:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:03.906 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:20:03.906 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:20:03.906 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:03.906 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:03.906 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:03.906 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:03.906 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:03.906 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.906 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.166 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:04.166 "name": "raid_bdev1", 00:20:04.166 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:04.166 "strip_size_kb": 0, 00:20:04.166 "state": "online", 00:20:04.166 "raid_level": "raid1", 00:20:04.166 "superblock": true, 00:20:04.166 "num_base_bdevs": 4, 00:20:04.166 "num_base_bdevs_discovered": 3, 00:20:04.166 "num_base_bdevs_operational": 3, 00:20:04.166 "base_bdevs_list": [ 00:20:04.166 { 00:20:04.166 "name": "spare", 00:20:04.166 "uuid": "78d27f1a-edaf-5418-bf52-ef04950d3ae6", 00:20:04.166 "is_configured": true, 00:20:04.166 "data_offset": 2048, 00:20:04.166 "data_size": 63488 00:20:04.166 }, 00:20:04.166 { 00:20:04.166 "name": null, 00:20:04.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.166 "is_configured": false, 00:20:04.166 "data_offset": 2048, 00:20:04.166 "data_size": 63488 00:20:04.166 }, 00:20:04.166 { 00:20:04.166 "name": "BaseBdev3", 00:20:04.166 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:04.166 "is_configured": true, 00:20:04.166 "data_offset": 2048, 00:20:04.166 "data_size": 63488 00:20:04.166 }, 00:20:04.166 { 00:20:04.166 "name": "BaseBdev4", 00:20:04.166 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:04.166 "is_configured": true, 00:20:04.166 "data_offset": 2048, 00:20:04.166 "data_size": 63488 00:20:04.166 } 00:20:04.166 ] 00:20:04.166 }' 00:20:04.166 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:04.166 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:04.166 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:04.166 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:04.166 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:04.166 
06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:04.166 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:04.166 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:04.166 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:04.166 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:04.166 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:04.166 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:04.166 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:04.166 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:04.166 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.166 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.424 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.424 "name": "raid_bdev1", 00:20:04.424 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:04.424 "strip_size_kb": 0, 00:20:04.424 "state": "online", 00:20:04.424 "raid_level": "raid1", 00:20:04.424 "superblock": true, 00:20:04.424 "num_base_bdevs": 4, 00:20:04.424 "num_base_bdevs_discovered": 3, 00:20:04.424 "num_base_bdevs_operational": 3, 00:20:04.424 "base_bdevs_list": [ 00:20:04.424 { 00:20:04.424 "name": "spare", 00:20:04.424 "uuid": "78d27f1a-edaf-5418-bf52-ef04950d3ae6", 00:20:04.424 "is_configured": true, 00:20:04.424 "data_offset": 2048, 00:20:04.424 "data_size": 63488 00:20:04.424 }, 00:20:04.424 { 00:20:04.424 "name": null, 00:20:04.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.424 "is_configured": false, 00:20:04.424 "data_offset": 2048, 00:20:04.424 "data_size": 63488 00:20:04.424 }, 00:20:04.424 { 00:20:04.424 "name": "BaseBdev3", 00:20:04.424 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:04.424 "is_configured": true, 00:20:04.424 "data_offset": 2048, 00:20:04.424 "data_size": 63488 00:20:04.424 }, 00:20:04.424 { 00:20:04.424 "name": "BaseBdev4", 00:20:04.424 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:04.424 "is_configured": true, 00:20:04.424 "data_offset": 2048, 00:20:04.424 "data_size": 63488 00:20:04.424 } 00:20:04.424 ] 00:20:04.424 }' 00:20:04.424 06:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.424 06:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.992 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:05.250 [2024-08-14 06:51:32.293585] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:05.250 [2024-08-14 06:51:32.293630] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:05.250 [2024-08-14 06:51:32.293718] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:05.250 [2024-08-14 06:51:32.293824] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:20:05.250 [2024-08-14 06:51:32.293834] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:20:05.250 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:20:05.250 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:05.508 /dev/nbd0 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:05.508 1+0 records in 00:20:05.508 1+0 records out 00:20:05.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379801 s, 10.8 MB/s 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@882 -- # size=4096 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:05.508 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:05.766 /dev/nbd1 00:20:05.766 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:05.766 06:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:05.766 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:20:05.766 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:20:05.766 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:05.766 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:05.766 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:20:05.766 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:20:05.766 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:05.766 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:05.766 06:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:05.766 1+0 records in 00:20:05.766 1+0 records out 00:20:05.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268203 s, 15.3 MB/s 00:20:05.766 06:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.766 06:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:20:05.766 06:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.766 06:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:05.766 06:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:20:05.766 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:05.766 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:05.766 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:06.024 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:06.024 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:06.024 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:06.024 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:20:06.024 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:06.024 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:06.024 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:20:06.282 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:06.541 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:06.861 [2024-08-14 06:51:33.925945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:06.861 [2024-08-14 06:51:33.926022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.861 [2024-08-14 06:51:33.926046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:06.861 [2024-08-14 06:51:33.926055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.861 [2024-08-14 06:51:33.928239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.861 [2024-08-14 06:51:33.928272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:06.861 [2024-08-14 06:51:33.928368] 
bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:06.861 [2024-08-14 06:51:33.928407] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:06.861 [2024-08-14 06:51:33.928544] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:06.861 [2024-08-14 06:51:33.928635] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:06.861 spare 00:20:06.861 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:06.861 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:06.861 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:06.861 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:06.861 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:06.861 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:06.861 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:06.861 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:06.861 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:06.861 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:06.861 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.861 06:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.861 [2024-08-14 06:51:34.028533] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:20:06.861 [2024-08-14 06:51:34.028581] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:06.861 [2024-08-14 06:51:34.028918] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:20:06.861 [2024-08-14 06:51:34.029102] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:20:06.861 [2024-08-14 06:51:34.029122] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:20:06.861 [2024-08-14 06:51:34.029286] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.120 06:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:07.120 "name": "raid_bdev1", 00:20:07.120 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:07.120 "strip_size_kb": 0, 00:20:07.120 "state": "online", 00:20:07.120 "raid_level": "raid1", 00:20:07.120 "superblock": true, 00:20:07.120 "num_base_bdevs": 4, 00:20:07.120 "num_base_bdevs_discovered": 3, 00:20:07.120 "num_base_bdevs_operational": 3, 00:20:07.120 "base_bdevs_list": [ 00:20:07.120 { 00:20:07.120 "name": "spare", 00:20:07.120 "uuid": "78d27f1a-edaf-5418-bf52-ef04950d3ae6", 00:20:07.120 "is_configured": true, 00:20:07.120 "data_offset": 2048, 00:20:07.120 "data_size": 63488 00:20:07.120 }, 00:20:07.120 { 00:20:07.120 "name": null, 00:20:07.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.120 "is_configured": false, 00:20:07.120 "data_offset": 2048, 
00:20:07.120 "data_size": 63488 00:20:07.120 }, 00:20:07.120 { 00:20:07.120 "name": "BaseBdev3", 00:20:07.120 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:07.120 "is_configured": true, 00:20:07.120 "data_offset": 2048, 00:20:07.120 "data_size": 63488 00:20:07.120 }, 00:20:07.120 { 00:20:07.120 "name": "BaseBdev4", 00:20:07.120 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:07.120 "is_configured": true, 00:20:07.120 "data_offset": 2048, 00:20:07.120 "data_size": 63488 00:20:07.120 } 00:20:07.120 ] 00:20:07.120 }' 00:20:07.120 06:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:07.120 06:51:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.686 06:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:07.686 06:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:07.686 06:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:07.686 06:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:07.686 06:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:07.686 06:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.687 06:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.945 06:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:07.945 "name": "raid_bdev1", 00:20:07.945 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:07.945 "strip_size_kb": 0, 00:20:07.945 "state": "online", 00:20:07.945 "raid_level": "raid1", 00:20:07.945 "superblock": true, 00:20:07.945 "num_base_bdevs": 4, 00:20:07.945 "num_base_bdevs_discovered": 3, 00:20:07.945 "num_base_bdevs_operational": 3, 00:20:07.945 "base_bdevs_list": [ 00:20:07.945 { 00:20:07.945 "name": "spare", 00:20:07.945 "uuid": "78d27f1a-edaf-5418-bf52-ef04950d3ae6", 00:20:07.945 "is_configured": true, 00:20:07.945 "data_offset": 2048, 00:20:07.945 "data_size": 63488 00:20:07.945 }, 00:20:07.945 { 00:20:07.945 "name": null, 00:20:07.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.945 "is_configured": false, 00:20:07.945 "data_offset": 2048, 00:20:07.945 "data_size": 63488 00:20:07.945 }, 00:20:07.945 { 00:20:07.945 "name": "BaseBdev3", 00:20:07.945 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:07.945 "is_configured": true, 00:20:07.945 "data_offset": 2048, 00:20:07.945 "data_size": 63488 00:20:07.945 }, 00:20:07.945 { 00:20:07.945 "name": "BaseBdev4", 00:20:07.945 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:07.945 "is_configured": true, 00:20:07.945 "data_offset": 2048, 00:20:07.945 "data_size": 63488 00:20:07.945 } 00:20:07.945 ] 00:20:07.945 }' 00:20:07.945 06:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:07.945 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:07.945 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:07.945 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:07.945 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.945 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:08.204 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.204 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:08.462 [2024-08-14 06:51:35.579384] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:08.462 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:08.462 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:08.462 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:08.462 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:08.462 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:08.462 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:08.462 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:08.462 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:08.462 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:08.462 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:08.462 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.462 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.720 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:08.720 "name": "raid_bdev1", 00:20:08.720 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:08.720 "strip_size_kb": 0, 00:20:08.720 "state": "online", 00:20:08.720 "raid_level": "raid1", 00:20:08.720 "superblock": true, 00:20:08.720 "num_base_bdevs": 4, 00:20:08.720 "num_base_bdevs_discovered": 2, 00:20:08.720 "num_base_bdevs_operational": 2, 00:20:08.720 "base_bdevs_list": [ 00:20:08.720 { 00:20:08.720 "name": null, 00:20:08.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.720 "is_configured": false, 00:20:08.720 "data_offset": 2048, 00:20:08.720 "data_size": 63488 00:20:08.720 }, 00:20:08.720 { 00:20:08.720 "name": null, 00:20:08.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.720 "is_configured": false, 00:20:08.720 "data_offset": 2048, 00:20:08.720 "data_size": 63488 00:20:08.720 }, 00:20:08.720 { 00:20:08.720 "name": "BaseBdev3", 00:20:08.720 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:08.720 "is_configured": true, 00:20:08.720 "data_offset": 2048, 00:20:08.720 "data_size": 63488 00:20:08.720 }, 00:20:08.720 { 00:20:08.720 "name": "BaseBdev4", 00:20:08.720 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:08.720 "is_configured": true, 00:20:08.720 "data_offset": 2048, 00:20:08.720 "data_size": 63488 00:20:08.720 } 00:20:08.720 ] 00:20:08.720 }' 00:20:08.720 06:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:08.720 
06:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.287 06:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:09.546 [2024-08-14 06:51:36.734013] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:09.546 [2024-08-14 06:51:36.734275] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:09.546 [2024-08-14 06:51:36.734295] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:09.546 [2024-08-14 06:51:36.734365] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:09.546 [2024-08-14 06:51:36.737879] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caebd0 00:20:09.546 [2024-08-14 06:51:36.740109] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:09.546 06:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:20:10.921 06:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:10.921 06:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:10.921 06:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:10.921 06:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:10.921 06:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:10.921 06:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.921 06:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.921 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:10.921 "name": "raid_bdev1", 00:20:10.921 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:10.921 "strip_size_kb": 0, 00:20:10.921 "state": "online", 00:20:10.921 "raid_level": "raid1", 00:20:10.921 "superblock": true, 00:20:10.921 "num_base_bdevs": 4, 00:20:10.921 "num_base_bdevs_discovered": 3, 00:20:10.921 "num_base_bdevs_operational": 3, 00:20:10.921 "process": { 00:20:10.921 "type": "rebuild", 00:20:10.921 "target": "spare", 00:20:10.921 "progress": { 00:20:10.921 "blocks": 24576, 00:20:10.921 "percent": 38 00:20:10.921 } 00:20:10.921 }, 00:20:10.921 "base_bdevs_list": [ 00:20:10.921 { 00:20:10.921 "name": "spare", 00:20:10.921 "uuid": "78d27f1a-edaf-5418-bf52-ef04950d3ae6", 00:20:10.921 "is_configured": true, 00:20:10.921 "data_offset": 2048, 00:20:10.921 "data_size": 63488 00:20:10.921 }, 00:20:10.921 { 00:20:10.921 "name": null, 00:20:10.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.921 "is_configured": false, 00:20:10.921 "data_offset": 2048, 00:20:10.921 "data_size": 63488 00:20:10.921 }, 00:20:10.921 { 00:20:10.921 "name": "BaseBdev3", 00:20:10.921 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:10.921 "is_configured": true, 00:20:10.921 "data_offset": 2048, 00:20:10.921 "data_size": 63488 00:20:10.921 }, 00:20:10.921 { 00:20:10.921 "name": "BaseBdev4", 00:20:10.921 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 
00:20:10.921 "is_configured": true, 00:20:10.921 "data_offset": 2048, 00:20:10.921 "data_size": 63488 00:20:10.921 } 00:20:10.921 ] 00:20:10.921 }' 00:20:10.921 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:10.921 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:10.921 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:10.921 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:10.921 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:11.179 [2024-08-14 06:51:38.370112] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:11.438 [2024-08-14 06:51:38.448201] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:11.438 [2024-08-14 06:51:38.448320] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.438 [2024-08-14 06:51:38.448348] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:11.438 [2024-08-14 06:51:38.448357] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:11.438 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:11.438 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:11.438 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:11.438 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:11.438 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:11.438 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:11.438 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:11.438 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:11.438 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:11.438 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:11.438 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.438 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.696 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:11.696 "name": "raid_bdev1", 00:20:11.696 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:11.696 "strip_size_kb": 0, 00:20:11.696 "state": "online", 00:20:11.696 "raid_level": "raid1", 00:20:11.696 "superblock": true, 00:20:11.696 "num_base_bdevs": 4, 00:20:11.696 "num_base_bdevs_discovered": 2, 00:20:11.696 "num_base_bdevs_operational": 2, 00:20:11.696 "base_bdevs_list": [ 00:20:11.696 { 00:20:11.696 "name": null, 00:20:11.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.696 "is_configured": false, 00:20:11.696 "data_offset": 2048, 00:20:11.696 "data_size": 63488 00:20:11.696 }, 00:20:11.696 { 
00:20:11.696 "name": null, 00:20:11.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.696 "is_configured": false, 00:20:11.696 "data_offset": 2048, 00:20:11.696 "data_size": 63488 00:20:11.696 }, 00:20:11.696 { 00:20:11.696 "name": "BaseBdev3", 00:20:11.696 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:11.696 "is_configured": true, 00:20:11.696 "data_offset": 2048, 00:20:11.696 "data_size": 63488 00:20:11.696 }, 00:20:11.696 { 00:20:11.696 "name": "BaseBdev4", 00:20:11.696 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:11.696 "is_configured": true, 00:20:11.696 "data_offset": 2048, 00:20:11.696 "data_size": 63488 00:20:11.696 } 00:20:11.696 ] 00:20:11.696 }' 00:20:11.696 06:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:11.696 06:51:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.261 06:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:12.519 [2024-08-14 06:51:39.606820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:12.519 [2024-08-14 06:51:39.606938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:12.519 [2024-08-14 06:51:39.606976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:20:12.519 [2024-08-14 06:51:39.606988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:12.519 [2024-08-14 06:51:39.607570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:12.519 [2024-08-14 06:51:39.607606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:12.519 [2024-08-14 06:51:39.607711] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:12.519 [2024-08-14 06:51:39.607729] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:12.519 [2024-08-14 06:51:39.607747] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:12.519 [2024-08-14 06:51:39.607776] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:12.519 [2024-08-14 06:51:39.611335] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:20:12.519 spare 00:20:12.519 [2024-08-14 06:51:39.613604] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:12.519 06:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:20:13.455 06:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.455 06:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:13.455 06:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:13.455 06:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:13.455 06:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:13.455 06:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.455 06:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.720 06:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:13.720 "name": "raid_bdev1", 00:20:13.720 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:13.720 "strip_size_kb": 0, 00:20:13.720 "state": "online", 00:20:13.720 "raid_level": "raid1", 00:20:13.720 "superblock": true, 00:20:13.720 "num_base_bdevs": 4, 00:20:13.720 "num_base_bdevs_discovered": 3, 00:20:13.720 "num_base_bdevs_operational": 3, 00:20:13.720 "process": { 00:20:13.720 "type": "rebuild", 00:20:13.720 "target": "spare", 00:20:13.720 "progress": { 00:20:13.720 "blocks": 24576, 00:20:13.720 "percent": 38 00:20:13.720 } 00:20:13.720 }, 00:20:13.720 "base_bdevs_list": [ 00:20:13.720 { 00:20:13.720 "name": "spare", 00:20:13.720 "uuid": "78d27f1a-edaf-5418-bf52-ef04950d3ae6", 00:20:13.720 "is_configured": true, 00:20:13.720 "data_offset": 2048, 00:20:13.720 "data_size": 63488 00:20:13.720 }, 00:20:13.720 { 00:20:13.720 "name": null, 00:20:13.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.720 "is_configured": false, 00:20:13.720 "data_offset": 2048, 00:20:13.720 "data_size": 63488 00:20:13.720 }, 00:20:13.720 { 00:20:13.720 "name": "BaseBdev3", 00:20:13.720 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:13.720 "is_configured": true, 00:20:13.720 "data_offset": 2048, 00:20:13.720 "data_size": 63488 00:20:13.720 }, 00:20:13.720 { 00:20:13.720 "name": "BaseBdev4", 00:20:13.720 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:13.720 "is_configured": true, 00:20:13.720 "data_offset": 2048, 00:20:13.720 "data_size": 63488 00:20:13.720 } 00:20:13.720 ] 00:20:13.720 }' 00:20:13.720 06:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:13.720 06:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.720 06:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:13.983 06:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.983 06:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:13.983 [2024-08-14 06:51:41.230043] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:14.242 [2024-08-14 06:51:41.321205] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:14.242 [2024-08-14 06:51:41.321310] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.242 [2024-08-14 06:51:41.321330] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:14.242 [2024-08-14 06:51:41.321341] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:14.242 06:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:14.242 06:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:14.242 06:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:14.242 06:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:14.242 06:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:14.242 06:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:14.242 06:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:14.242 06:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:14.242 06:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:14.242 06:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:14.242 06:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.242 06:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.500 06:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:14.500 "name": "raid_bdev1", 00:20:14.500 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:14.500 "strip_size_kb": 0, 00:20:14.500 "state": "online", 00:20:14.500 "raid_level": "raid1", 00:20:14.500 "superblock": true, 00:20:14.500 "num_base_bdevs": 4, 00:20:14.500 "num_base_bdevs_discovered": 2, 00:20:14.500 "num_base_bdevs_operational": 2, 00:20:14.500 "base_bdevs_list": [ 00:20:14.500 { 00:20:14.500 "name": null, 00:20:14.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.500 "is_configured": false, 00:20:14.500 "data_offset": 2048, 00:20:14.500 "data_size": 63488 00:20:14.500 }, 00:20:14.500 { 00:20:14.500 "name": null, 00:20:14.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.500 "is_configured": false, 00:20:14.500 "data_offset": 2048, 00:20:14.500 "data_size": 63488 00:20:14.500 }, 00:20:14.500 { 00:20:14.500 "name": "BaseBdev3", 00:20:14.500 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:14.500 "is_configured": true, 00:20:14.500 "data_offset": 2048, 00:20:14.500 "data_size": 63488 00:20:14.500 }, 00:20:14.500 { 00:20:14.500 "name": "BaseBdev4", 00:20:14.500 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:14.500 "is_configured": true, 00:20:14.500 "data_offset": 2048, 00:20:14.500 "data_size": 63488 00:20:14.500 } 00:20:14.500 ] 00:20:14.500 }' 
00:20:14.501 06:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:14.501 06:51:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.068 06:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:15.068 06:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:15.068 06:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:15.068 06:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:15.068 06:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:15.068 06:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.068 06:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.326 06:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:15.326 "name": "raid_bdev1", 00:20:15.326 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:15.326 "strip_size_kb": 0, 00:20:15.326 "state": "online", 00:20:15.326 "raid_level": "raid1", 00:20:15.326 "superblock": true, 00:20:15.326 "num_base_bdevs": 4, 00:20:15.326 "num_base_bdevs_discovered": 2, 00:20:15.326 "num_base_bdevs_operational": 2, 00:20:15.326 "base_bdevs_list": [ 00:20:15.326 { 00:20:15.326 "name": null, 00:20:15.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.326 "is_configured": false, 00:20:15.327 "data_offset": 2048, 00:20:15.327 "data_size": 63488 00:20:15.327 }, 00:20:15.327 { 00:20:15.327 "name": null, 00:20:15.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.327 "is_configured": false, 00:20:15.327 "data_offset": 2048, 00:20:15.327 "data_size": 63488 00:20:15.327 }, 00:20:15.327 { 00:20:15.327 "name": "BaseBdev3", 00:20:15.327 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:15.327 "is_configured": true, 00:20:15.327 "data_offset": 2048, 00:20:15.327 "data_size": 63488 00:20:15.327 }, 00:20:15.327 { 00:20:15.327 "name": "BaseBdev4", 00:20:15.327 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:15.327 "is_configured": true, 00:20:15.327 "data_offset": 2048, 00:20:15.327 "data_size": 63488 00:20:15.327 } 00:20:15.327 ] 00:20:15.327 }' 00:20:15.327 06:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:15.327 06:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:15.327 06:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:15.327 06:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:15.327 06:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:15.586 06:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:15.846 [2024-08-14 06:51:42.998783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:15.846 [2024-08-14 06:51:42.998865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:20:15.846 [2024-08-14 06:51:42.998888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:20:15.846 [2024-08-14 06:51:42.998901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.846 [2024-08-14 06:51:42.999382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.846 [2024-08-14 06:51:42.999407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:15.846 [2024-08-14 06:51:42.999495] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:15.846 [2024-08-14 06:51:42.999519] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:15.846 [2024-08-14 06:51:42.999528] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:15.846 BaseBdev1 00:20:15.846 06:51:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:20:16.783 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:16.783 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:16.783 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:16.783 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:16.783 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:16.783 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:16.783 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.783 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.783 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.783 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.783 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.783 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.042 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:17.042 "name": "raid_bdev1", 00:20:17.042 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:17.042 "strip_size_kb": 0, 00:20:17.042 "state": "online", 00:20:17.042 "raid_level": "raid1", 00:20:17.042 "superblock": true, 00:20:17.042 "num_base_bdevs": 4, 00:20:17.043 "num_base_bdevs_discovered": 2, 00:20:17.043 "num_base_bdevs_operational": 2, 00:20:17.043 "base_bdevs_list": [ 00:20:17.043 { 00:20:17.043 "name": null, 00:20:17.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.043 "is_configured": false, 00:20:17.043 "data_offset": 2048, 00:20:17.043 "data_size": 63488 00:20:17.043 }, 00:20:17.043 { 00:20:17.043 "name": null, 00:20:17.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.043 "is_configured": false, 00:20:17.043 "data_offset": 2048, 00:20:17.043 "data_size": 63488 00:20:17.043 }, 00:20:17.043 { 00:20:17.043 "name": "BaseBdev3", 00:20:17.043 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:17.043 "is_configured": 
true, 00:20:17.043 "data_offset": 2048, 00:20:17.043 "data_size": 63488 00:20:17.043 }, 00:20:17.043 { 00:20:17.043 "name": "BaseBdev4", 00:20:17.043 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:17.043 "is_configured": true, 00:20:17.043 "data_offset": 2048, 00:20:17.043 "data_size": 63488 00:20:17.043 } 00:20:17.043 ] 00:20:17.043 }' 00:20:17.043 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:17.043 06:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.611 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:17.611 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:17.611 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:17.611 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:17.611 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:17.870 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.870 06:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.870 06:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:17.870 "name": "raid_bdev1", 00:20:17.870 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:17.870 "strip_size_kb": 0, 00:20:17.870 "state": "online", 00:20:17.870 "raid_level": "raid1", 00:20:17.870 "superblock": true, 00:20:17.870 "num_base_bdevs": 4, 00:20:17.870 "num_base_bdevs_discovered": 2, 00:20:17.870 "num_base_bdevs_operational": 2, 00:20:17.870 "base_bdevs_list": [ 00:20:17.870 { 00:20:17.870 "name": null, 00:20:17.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.870 "is_configured": false, 00:20:17.870 "data_offset": 2048, 00:20:17.870 "data_size": 63488 00:20:17.870 }, 00:20:17.870 { 00:20:17.870 "name": null, 00:20:17.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.870 "is_configured": false, 00:20:17.870 "data_offset": 2048, 00:20:17.870 "data_size": 63488 00:20:17.870 }, 00:20:17.870 { 00:20:17.870 "name": "BaseBdev3", 00:20:17.870 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:17.870 "is_configured": true, 00:20:17.870 "data_offset": 2048, 00:20:17.870 "data_size": 63488 00:20:17.870 }, 00:20:17.870 { 00:20:17.870 "name": "BaseBdev4", 00:20:17.870 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:17.870 "is_configured": true, 00:20:17.870 "data_offset": 2048, 00:20:17.870 "data_size": 63488 00:20:17.870 } 00:20:17.870 ] 00:20:17.870 }' 00:20:18.153 06:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:18.153 06:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:18.153 06:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:18.153 06:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:18.153 06:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:18.153 06:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@646 
-- # local es=0 00:20:18.153 06:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:18.153 06:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.153 06:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:20:18.153 06:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.153 06:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:20:18.153 06:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.153 06:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:20:18.153 06:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.153 06:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:18.153 06:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:18.432 [2024-08-14 06:51:45.443790] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:18.432 [2024-08-14 06:51:45.443968] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:18.432 [2024-08-14 06:51:45.443993] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:18.432 request: 00:20:18.432 { 00:20:18.432 "base_bdev": "BaseBdev1", 00:20:18.432 "raid_bdev": "raid_bdev1", 00:20:18.432 "method": "bdev_raid_add_base_bdev", 00:20:18.432 "req_id": 1 00:20:18.432 } 00:20:18.432 Got JSON-RPC error response 00:20:18.432 response: 00:20:18.432 { 00:20:18.432 "code": -22, 00:20:18.432 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:18.432 } 00:20:18.432 06:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@649 -- # es=1 00:20:18.432 06:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:20:18.432 06:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:20:18.432 06:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:20:18.432 06:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:20:19.363 06:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:19.363 06:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:19.363 06:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:19.364 06:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:19.364 06:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:19.364 06:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:20:19.364 06:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:19.364 06:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:19.364 06:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:19.364 06:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:19.364 06:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.364 06:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.622 06:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:19.622 "name": "raid_bdev1", 00:20:19.622 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:19.622 "strip_size_kb": 0, 00:20:19.622 "state": "online", 00:20:19.622 "raid_level": "raid1", 00:20:19.622 "superblock": true, 00:20:19.622 "num_base_bdevs": 4, 00:20:19.622 "num_base_bdevs_discovered": 2, 00:20:19.622 "num_base_bdevs_operational": 2, 00:20:19.622 "base_bdevs_list": [ 00:20:19.622 { 00:20:19.622 "name": null, 00:20:19.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.622 "is_configured": false, 00:20:19.622 "data_offset": 2048, 00:20:19.622 "data_size": 63488 00:20:19.622 }, 00:20:19.622 { 00:20:19.622 "name": null, 00:20:19.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.622 "is_configured": false, 00:20:19.622 "data_offset": 2048, 00:20:19.622 "data_size": 63488 00:20:19.622 }, 00:20:19.622 { 00:20:19.622 "name": "BaseBdev3", 00:20:19.622 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:19.622 "is_configured": true, 00:20:19.622 "data_offset": 2048, 00:20:19.622 "data_size": 63488 00:20:19.622 }, 00:20:19.622 { 00:20:19.622 "name": "BaseBdev4", 00:20:19.622 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:19.622 "is_configured": true, 00:20:19.622 "data_offset": 2048, 00:20:19.622 "data_size": 63488 00:20:19.622 } 00:20:19.622 ] 00:20:19.622 }' 00:20:19.622 06:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:19.622 06:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.189 06:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:20.189 06:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:20.189 06:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:20.189 06:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:20.189 06:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:20.189 06:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.189 06:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.447 06:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:20.447 "name": "raid_bdev1", 00:20:20.447 "uuid": "1e81db2d-4c9b-41de-aca3-d795bd7fbdbc", 00:20:20.447 "strip_size_kb": 0, 00:20:20.447 "state": "online", 00:20:20.447 "raid_level": "raid1", 00:20:20.447 "superblock": 
true, 00:20:20.447 "num_base_bdevs": 4, 00:20:20.447 "num_base_bdevs_discovered": 2, 00:20:20.447 "num_base_bdevs_operational": 2, 00:20:20.447 "base_bdevs_list": [ 00:20:20.447 { 00:20:20.447 "name": null, 00:20:20.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.447 "is_configured": false, 00:20:20.447 "data_offset": 2048, 00:20:20.447 "data_size": 63488 00:20:20.447 }, 00:20:20.447 { 00:20:20.447 "name": null, 00:20:20.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.447 "is_configured": false, 00:20:20.447 "data_offset": 2048, 00:20:20.447 "data_size": 63488 00:20:20.447 }, 00:20:20.447 { 00:20:20.447 "name": "BaseBdev3", 00:20:20.447 "uuid": "d8ec162d-55a8-516a-b503-a7964800fa17", 00:20:20.447 "is_configured": true, 00:20:20.447 "data_offset": 2048, 00:20:20.447 "data_size": 63488 00:20:20.447 }, 00:20:20.447 { 00:20:20.447 "name": "BaseBdev4", 00:20:20.447 "uuid": "bedca3ac-3cbe-52dc-b7a4-829d0f1e8f00", 00:20:20.447 "is_configured": true, 00:20:20.447 "data_offset": 2048, 00:20:20.447 "data_size": 63488 00:20:20.447 } 00:20:20.447 ] 00:20:20.447 }' 00:20:20.447 06:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:20.447 06:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:20.447 06:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:20.447 06:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:20.447 06:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 95616 00:20:20.447 06:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 95616 ']' 00:20:20.447 06:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 95616 00:20:20.447 06:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:20:20.447 06:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:20.447 06:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95616 00:20:20.447 06:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:20.447 killing process with pid 95616 00:20:20.447 Received shutdown signal, test time was about 60.000000 seconds 00:20:20.447 00:20:20.447 Latency(us) 00:20:20.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.447 =================================================================================================================== 00:20:20.447 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:20.448 06:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:20.448 06:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95616' 00:20:20.448 06:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 95616 00:20:20.448 [2024-08-14 06:51:47.676835] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:20.448 [2024-08-14 06:51:47.676976] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:20.448 06:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 95616 00:20:20.448 [2024-08-14 06:51:47.677050] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:20:20.448 [2024-08-14 06:51:47.677063] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:20:20.705 [2024-08-14 06:51:47.730710] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:20.963 ************************************ 00:20:20.963 END TEST raid_rebuild_test_sb 00:20:20.963 ************************************ 00:20:20.963 06:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:20:20.963 00:20:20.963 real 0m38.447s 00:20:20.963 user 0m56.908s 00:20:20.964 sys 0m5.814s 00:20:20.964 06:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:20.964 06:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.964 06:51:48 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:20:20.964 06:51:48 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:20:20.964 06:51:48 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:20.964 06:51:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:20.964 ************************************ 00:20:20.964 START TEST raid_rebuild_test_io 00:20:20.964 ************************************ 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 false true true 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 
00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=96520 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 96520 /var/tmp/spdk-raid.sock 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@827 -- # '[' -z 96520 ']' 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:20.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:20.964 06:51:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:20.964 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:20.964 Zero copy mechanism will not be used. 00:20:20.964 [2024-08-14 06:51:48.141096] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:20:20.964 [2024-08-14 06:51:48.141269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96520 ] 00:20:21.222 [2024-08-14 06:51:48.288234] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.222 [2024-08-14 06:51:48.342044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.222 [2024-08-14 06:51:48.386615] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.222 [2024-08-14 06:51:48.386660] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.788 06:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:21.788 06:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # return 0 00:20:21.788 06:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:20:21.788 06:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:22.051 BaseBdev1_malloc 00:20:22.051 06:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:22.309 [2024-08-14 06:51:49.436789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:22.309 [2024-08-14 06:51:49.436887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.309 [2024-08-14 06:51:49.436915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:20:22.309 [2024-08-14 06:51:49.436957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.309 [2024-08-14 06:51:49.439503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.309 [2024-08-14 06:51:49.439556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:22.309 BaseBdev1 00:20:22.309 06:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:20:22.309 06:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:22.567 BaseBdev2_malloc 00:20:22.567 06:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:22.825 [2024-08-14 06:51:49.916988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:22.825 [2024-08-14 06:51:49.917104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.825 [2024-08-14 06:51:49.917134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:22.825 [2024-08-14 06:51:49.917147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.825 [2024-08-14 06:51:49.919739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.825 [2024-08-14 06:51:49.919797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev2 00:20:22.825 BaseBdev2 00:20:22.825 06:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:20:22.825 06:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:23.082 BaseBdev3_malloc 00:20:23.082 06:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:23.339 [2024-08-14 06:51:50.445298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:23.339 [2024-08-14 06:51:50.445392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.339 [2024-08-14 06:51:50.445421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:23.339 [2024-08-14 06:51:50.445433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.339 [2024-08-14 06:51:50.447973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.339 [2024-08-14 06:51:50.448027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:23.339 BaseBdev3 00:20:23.339 06:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:20:23.339 06:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:23.596 BaseBdev4_malloc 00:20:23.597 06:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:23.855 [2024-08-14 06:51:50.938301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:23.855 [2024-08-14 06:51:50.938392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.855 [2024-08-14 06:51:50.938419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:23.855 [2024-08-14 06:51:50.938435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.855 [2024-08-14 06:51:50.940878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.855 [2024-08-14 06:51:50.940931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:23.855 BaseBdev4 00:20:23.855 06:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:24.113 spare_malloc 00:20:24.113 06:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:24.371 spare_delay 00:20:24.372 06:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:24.630 [2024-08-14 06:51:51.634595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:24.630 [2024-08-14 06:51:51.634687] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:20:24.630 [2024-08-14 06:51:51.634713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:24.630 [2024-08-14 06:51:51.634726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.630 [2024-08-14 06:51:51.637125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.630 spare 00:20:24.630 [2024-08-14 06:51:51.637236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:24.630 06:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:24.630 [2024-08-14 06:51:51.866333] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:24.630 [2024-08-14 06:51:51.868537] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:24.630 [2024-08-14 06:51:51.868614] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:24.630 [2024-08-14 06:51:51.868667] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:24.630 [2024-08-14 06:51:51.868772] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:20:24.630 [2024-08-14 06:51:51.868787] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:24.630 [2024-08-14 06:51:51.869143] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:20:24.630 [2024-08-14 06:51:51.869341] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:20:24.630 [2024-08-14 06:51:51.869354] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:20:24.630 [2024-08-14 06:51:51.869509] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.888 06:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:24.888 06:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:24.888 06:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:24.888 06:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:24.888 06:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:24.888 06:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:24.888 06:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:24.888 06:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:24.888 06:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:24.888 06:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.888 06:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.888 06:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.888 06:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:20:24.888 "name": "raid_bdev1", 00:20:24.888 "uuid": "3a3cc7b5-c6bf-4d91-bbfc-2308ee04554c", 00:20:24.888 "strip_size_kb": 0, 00:20:24.888 "state": "online", 00:20:24.888 "raid_level": "raid1", 00:20:24.888 "superblock": false, 00:20:24.888 "num_base_bdevs": 4, 00:20:24.888 "num_base_bdevs_discovered": 4, 00:20:24.888 "num_base_bdevs_operational": 4, 00:20:24.888 "base_bdevs_list": [ 00:20:24.888 { 00:20:24.888 "name": "BaseBdev1", 00:20:24.888 "uuid": "837c869e-084d-584f-92d9-587b21593a1c", 00:20:24.888 "is_configured": true, 00:20:24.888 "data_offset": 0, 00:20:24.888 "data_size": 65536 00:20:24.888 }, 00:20:24.888 { 00:20:24.888 "name": "BaseBdev2", 00:20:24.888 "uuid": "4faee53c-0fd3-5934-93ff-69b058c3103b", 00:20:24.888 "is_configured": true, 00:20:24.888 "data_offset": 0, 00:20:24.888 "data_size": 65536 00:20:24.888 }, 00:20:24.888 { 00:20:24.888 "name": "BaseBdev3", 00:20:24.888 "uuid": "de646eb4-5d9b-5616-b96a-ecfab9ceba92", 00:20:24.888 "is_configured": true, 00:20:24.888 "data_offset": 0, 00:20:24.888 "data_size": 65536 00:20:24.888 }, 00:20:24.888 { 00:20:24.888 "name": "BaseBdev4", 00:20:24.888 "uuid": "eca8c179-d885-5c80-b3ee-93f0bec4a7bb", 00:20:24.888 "is_configured": true, 00:20:24.888 "data_offset": 0, 00:20:24.888 "data_size": 65536 00:20:24.888 } 00:20:24.888 ] 00:20:24.888 }' 00:20:24.888 06:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.888 06:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:25.823 06:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:25.823 06:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:20:25.823 [2024-08-14 06:51:52.960895] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:25.823 06:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:20:25.823 06:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:25.823 06:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.081 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:20:26.081 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:20:26.082 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:26.082 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:26.082 [2024-08-14 06:51:53.330076] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:20:26.082 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:26.082 Zero copy mechanism will not be used. 00:20:26.082 Running I/O for 60 seconds... 
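With the array assembled, the state checks above are plain RPC-plus-jq queries and the first hot-removal is a single RPC. A minimal sketch of those calls, with the socket path, RPC names and jq filters copied from the trace; running perform_tests in the background is an assumption here:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    # raid descriptor consumed by verify_raid_bdev_state (state, raid_level, base_bdevs_list, ...)
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
    # exported size and data offset of the array
    $RPC bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks'
    $RPC bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset'
    # start the timed workload in the already-running bdevperf, then pull a base bdev out
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
    $RPC bdev_raid_remove_base_bdev BaseBdev1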
00:20:26.340 [2024-08-14 06:51:53.472696] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:26.340 [2024-08-14 06:51:53.486103] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:20:26.340 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:26.340 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:26.340 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:26.340 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:26.340 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:26.340 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:26.340 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:26.340 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:26.340 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:26.340 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:26.340 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.341 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.599 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:26.599 "name": "raid_bdev1", 00:20:26.599 "uuid": "3a3cc7b5-c6bf-4d91-bbfc-2308ee04554c", 00:20:26.599 "strip_size_kb": 0, 00:20:26.599 "state": "online", 00:20:26.599 "raid_level": "raid1", 00:20:26.599 "superblock": false, 00:20:26.599 "num_base_bdevs": 4, 00:20:26.599 "num_base_bdevs_discovered": 3, 00:20:26.599 "num_base_bdevs_operational": 3, 00:20:26.599 "base_bdevs_list": [ 00:20:26.599 { 00:20:26.599 "name": null, 00:20:26.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.599 "is_configured": false, 00:20:26.599 "data_offset": 0, 00:20:26.599 "data_size": 65536 00:20:26.599 }, 00:20:26.599 { 00:20:26.599 "name": "BaseBdev2", 00:20:26.599 "uuid": "4faee53c-0fd3-5934-93ff-69b058c3103b", 00:20:26.599 "is_configured": true, 00:20:26.599 "data_offset": 0, 00:20:26.599 "data_size": 65536 00:20:26.599 }, 00:20:26.599 { 00:20:26.599 "name": "BaseBdev3", 00:20:26.599 "uuid": "de646eb4-5d9b-5616-b96a-ecfab9ceba92", 00:20:26.599 "is_configured": true, 00:20:26.599 "data_offset": 0, 00:20:26.599 "data_size": 65536 00:20:26.599 }, 00:20:26.599 { 00:20:26.599 "name": "BaseBdev4", 00:20:26.599 "uuid": "eca8c179-d885-5c80-b3ee-93f0bec4a7bb", 00:20:26.599 "is_configured": true, 00:20:26.599 "data_offset": 0, 00:20:26.599 "data_size": 65536 00:20:26.599 } 00:20:26.599 ] 00:20:26.599 }' 00:20:26.599 06:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:26.599 06:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:27.166 06:51:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:27.425 [2024-08-14 06:51:54.545259] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:20:27.425 06:51:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:27.425 [2024-08-14 06:51:54.616573] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:20:27.425 [2024-08-14 06:51:54.618810] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:27.683 [2024-08-14 06:51:54.728684] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:27.683 [2024-08-14 06:51:54.729280] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:27.683 [2024-08-14 06:51:54.845735] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:27.683 [2024-08-14 06:51:54.846199] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:27.943 [2024-08-14 06:51:55.089538] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:28.202 [2024-08-14 06:51:55.305468] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:28.202 [2024-08-14 06:51:55.306350] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:28.461 06:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:28.461 06:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:28.461 06:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:28.461 06:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:28.461 06:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:28.461 06:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.461 06:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.461 [2024-08-14 06:51:55.650187] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:28.461 [2024-08-14 06:51:55.651662] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:28.722 06:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:28.722 "name": "raid_bdev1", 00:20:28.722 "uuid": "3a3cc7b5-c6bf-4d91-bbfc-2308ee04554c", 00:20:28.722 "strip_size_kb": 0, 00:20:28.722 "state": "online", 00:20:28.722 "raid_level": "raid1", 00:20:28.722 "superblock": false, 00:20:28.722 "num_base_bdevs": 4, 00:20:28.722 "num_base_bdevs_discovered": 4, 00:20:28.722 "num_base_bdevs_operational": 4, 00:20:28.722 "process": { 00:20:28.722 "type": "rebuild", 00:20:28.722 "target": "spare", 00:20:28.722 "progress": { 00:20:28.722 "blocks": 14336, 00:20:28.722 "percent": 21 00:20:28.722 } 00:20:28.722 }, 00:20:28.722 "base_bdevs_list": [ 00:20:28.722 { 00:20:28.722 "name": "spare", 00:20:28.722 "uuid": "0aa4fdaa-8a2a-5a75-a106-55384d7371cf", 00:20:28.722 "is_configured": true, 00:20:28.722 
"data_offset": 0, 00:20:28.722 "data_size": 65536 00:20:28.722 }, 00:20:28.722 { 00:20:28.722 "name": "BaseBdev2", 00:20:28.722 "uuid": "4faee53c-0fd3-5934-93ff-69b058c3103b", 00:20:28.722 "is_configured": true, 00:20:28.722 "data_offset": 0, 00:20:28.722 "data_size": 65536 00:20:28.722 }, 00:20:28.722 { 00:20:28.722 "name": "BaseBdev3", 00:20:28.722 "uuid": "de646eb4-5d9b-5616-b96a-ecfab9ceba92", 00:20:28.722 "is_configured": true, 00:20:28.722 "data_offset": 0, 00:20:28.722 "data_size": 65536 00:20:28.722 }, 00:20:28.722 { 00:20:28.722 "name": "BaseBdev4", 00:20:28.722 "uuid": "eca8c179-d885-5c80-b3ee-93f0bec4a7bb", 00:20:28.722 "is_configured": true, 00:20:28.722 "data_offset": 0, 00:20:28.722 "data_size": 65536 00:20:28.722 } 00:20:28.722 ] 00:20:28.722 }' 00:20:28.722 06:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:28.722 [2024-08-14 06:51:55.893772] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:28.722 06:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:28.722 06:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:28.722 06:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.722 06:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:28.992 [2024-08-14 06:51:56.148269] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:29.252 [2024-08-14 06:51:56.330856] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:29.252 [2024-08-14 06:51:56.342968] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.252 [2024-08-14 06:51:56.343177] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:29.252 [2024-08-14 06:51:56.343221] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:29.252 [2024-08-14 06:51:56.368026] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:20:29.252 06:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:29.252 06:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:29.252 06:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:29.252 06:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:29.252 06:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:29.252 06:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:29.252 06:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:29.252 06:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:29.252 06:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:29.252 06:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:29.252 06:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.252 06:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.512 06:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.512 "name": "raid_bdev1", 00:20:29.512 "uuid": "3a3cc7b5-c6bf-4d91-bbfc-2308ee04554c", 00:20:29.512 "strip_size_kb": 0, 00:20:29.512 "state": "online", 00:20:29.512 "raid_level": "raid1", 00:20:29.512 "superblock": false, 00:20:29.512 "num_base_bdevs": 4, 00:20:29.512 "num_base_bdevs_discovered": 3, 00:20:29.512 "num_base_bdevs_operational": 3, 00:20:29.512 "base_bdevs_list": [ 00:20:29.512 { 00:20:29.512 "name": null, 00:20:29.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.512 "is_configured": false, 00:20:29.512 "data_offset": 0, 00:20:29.512 "data_size": 65536 00:20:29.512 }, 00:20:29.512 { 00:20:29.512 "name": "BaseBdev2", 00:20:29.512 "uuid": "4faee53c-0fd3-5934-93ff-69b058c3103b", 00:20:29.512 "is_configured": true, 00:20:29.512 "data_offset": 0, 00:20:29.512 "data_size": 65536 00:20:29.512 }, 00:20:29.512 { 00:20:29.512 "name": "BaseBdev3", 00:20:29.512 "uuid": "de646eb4-5d9b-5616-b96a-ecfab9ceba92", 00:20:29.512 "is_configured": true, 00:20:29.512 "data_offset": 0, 00:20:29.512 "data_size": 65536 00:20:29.512 }, 00:20:29.512 { 00:20:29.512 "name": "BaseBdev4", 00:20:29.512 "uuid": "eca8c179-d885-5c80-b3ee-93f0bec4a7bb", 00:20:29.512 "is_configured": true, 00:20:29.512 "data_offset": 0, 00:20:29.512 "data_size": 65536 00:20:29.512 } 00:20:29.512 ] 00:20:29.512 }' 00:20:29.512 06:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.512 06:51:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:30.082 06:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:30.082 06:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:30.082 06:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:30.082 06:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:30.082 06:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:30.082 06:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.082 06:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.342 06:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:30.342 "name": "raid_bdev1", 00:20:30.342 "uuid": "3a3cc7b5-c6bf-4d91-bbfc-2308ee04554c", 00:20:30.342 "strip_size_kb": 0, 00:20:30.342 "state": "online", 00:20:30.342 "raid_level": "raid1", 00:20:30.342 "superblock": false, 00:20:30.342 "num_base_bdevs": 4, 00:20:30.342 "num_base_bdevs_discovered": 3, 00:20:30.342 "num_base_bdevs_operational": 3, 00:20:30.342 "base_bdevs_list": [ 00:20:30.342 { 00:20:30.342 "name": null, 00:20:30.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.342 "is_configured": false, 00:20:30.342 "data_offset": 0, 00:20:30.342 "data_size": 65536 00:20:30.342 }, 00:20:30.342 { 00:20:30.342 "name": "BaseBdev2", 00:20:30.342 "uuid": "4faee53c-0fd3-5934-93ff-69b058c3103b", 00:20:30.342 "is_configured": true, 
00:20:30.342 "data_offset": 0, 00:20:30.342 "data_size": 65536 00:20:30.342 }, 00:20:30.342 { 00:20:30.342 "name": "BaseBdev3", 00:20:30.342 "uuid": "de646eb4-5d9b-5616-b96a-ecfab9ceba92", 00:20:30.342 "is_configured": true, 00:20:30.342 "data_offset": 0, 00:20:30.342 "data_size": 65536 00:20:30.342 }, 00:20:30.342 { 00:20:30.342 "name": "BaseBdev4", 00:20:30.342 "uuid": "eca8c179-d885-5c80-b3ee-93f0bec4a7bb", 00:20:30.342 "is_configured": true, 00:20:30.342 "data_offset": 0, 00:20:30.342 "data_size": 65536 00:20:30.342 } 00:20:30.342 ] 00:20:30.342 }' 00:20:30.342 06:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:30.600 06:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:30.600 06:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:30.600 06:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:30.600 06:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:30.859 [2024-08-14 06:51:57.884776] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:30.860 06:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:20:30.860 [2024-08-14 06:51:57.946797] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:20:30.860 [2024-08-14 06:51:57.949057] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:30.860 [2024-08-14 06:51:58.073613] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:30.860 [2024-08-14 06:51:58.075116] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:31.120 [2024-08-14 06:51:58.304404] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:31.120 [2024-08-14 06:51:58.304718] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:31.689 [2024-08-14 06:51:58.688831] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:31.689 [2024-08-14 06:51:58.942371] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:31.949 06:51:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.949 06:51:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:31.949 06:51:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:31.949 06:51:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:31.949 06:51:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:31.949 06:51:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.949 06:51:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.949 [2024-08-14 06:51:59.146531] 
bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:31.949 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:31.949 "name": "raid_bdev1", 00:20:31.949 "uuid": "3a3cc7b5-c6bf-4d91-bbfc-2308ee04554c", 00:20:31.949 "strip_size_kb": 0, 00:20:31.949 "state": "online", 00:20:31.949 "raid_level": "raid1", 00:20:31.949 "superblock": false, 00:20:31.949 "num_base_bdevs": 4, 00:20:31.949 "num_base_bdevs_discovered": 4, 00:20:31.949 "num_base_bdevs_operational": 4, 00:20:31.949 "process": { 00:20:31.949 "type": "rebuild", 00:20:31.949 "target": "spare", 00:20:31.949 "progress": { 00:20:31.949 "blocks": 16384, 00:20:31.949 "percent": 25 00:20:31.949 } 00:20:31.949 }, 00:20:31.949 "base_bdevs_list": [ 00:20:31.949 { 00:20:31.949 "name": "spare", 00:20:31.949 "uuid": "0aa4fdaa-8a2a-5a75-a106-55384d7371cf", 00:20:31.949 "is_configured": true, 00:20:31.949 "data_offset": 0, 00:20:31.949 "data_size": 65536 00:20:31.949 }, 00:20:31.949 { 00:20:31.949 "name": "BaseBdev2", 00:20:31.949 "uuid": "4faee53c-0fd3-5934-93ff-69b058c3103b", 00:20:31.949 "is_configured": true, 00:20:31.949 "data_offset": 0, 00:20:31.949 "data_size": 65536 00:20:31.949 }, 00:20:31.949 { 00:20:31.949 "name": "BaseBdev3", 00:20:31.949 "uuid": "de646eb4-5d9b-5616-b96a-ecfab9ceba92", 00:20:31.949 "is_configured": true, 00:20:31.949 "data_offset": 0, 00:20:31.949 "data_size": 65536 00:20:31.949 }, 00:20:31.949 { 00:20:31.949 "name": "BaseBdev4", 00:20:31.949 "uuid": "eca8c179-d885-5c80-b3ee-93f0bec4a7bb", 00:20:31.949 "is_configured": true, 00:20:31.949 "data_offset": 0, 00:20:31.949 "data_size": 65536 00:20:31.949 } 00:20:31.949 ] 00:20:31.949 }' 00:20:31.949 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:32.209 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:32.209 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:32.209 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.209 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:20:32.209 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:20:32.209 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:20:32.209 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:20:32.209 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:32.209 [2024-08-14 06:51:59.462871] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:32.468 [2024-08-14 06:51:59.486904] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:32.468 [2024-08-14 06:51:59.670552] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:32.728 [2024-08-14 06:51:59.778706] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:20:32.728 [2024-08-14 06:51:59.778842] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:20:32.728 [2024-08-14 
06:51:59.787340] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:32.728 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:20:32.728 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:20:32.728 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.728 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:32.728 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:32.728 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:32.728 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:32.728 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.728 06:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.988 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:32.988 "name": "raid_bdev1", 00:20:32.988 "uuid": "3a3cc7b5-c6bf-4d91-bbfc-2308ee04554c", 00:20:32.988 "strip_size_kb": 0, 00:20:32.988 "state": "online", 00:20:32.988 "raid_level": "raid1", 00:20:32.988 "superblock": false, 00:20:32.988 "num_base_bdevs": 4, 00:20:32.988 "num_base_bdevs_discovered": 3, 00:20:32.988 "num_base_bdevs_operational": 3, 00:20:32.988 "process": { 00:20:32.988 "type": "rebuild", 00:20:32.988 "target": "spare", 00:20:32.988 "progress": { 00:20:32.988 "blocks": 22528, 00:20:32.988 "percent": 34 00:20:32.988 } 00:20:32.988 }, 00:20:32.988 "base_bdevs_list": [ 00:20:32.988 { 00:20:32.988 "name": "spare", 00:20:32.988 "uuid": "0aa4fdaa-8a2a-5a75-a106-55384d7371cf", 00:20:32.988 "is_configured": true, 00:20:32.988 "data_offset": 0, 00:20:32.988 "data_size": 65536 00:20:32.988 }, 00:20:32.988 { 00:20:32.988 "name": null, 00:20:32.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.988 "is_configured": false, 00:20:32.988 "data_offset": 0, 00:20:32.988 "data_size": 65536 00:20:32.988 }, 00:20:32.988 { 00:20:32.988 "name": "BaseBdev3", 00:20:32.988 "uuid": "de646eb4-5d9b-5616-b96a-ecfab9ceba92", 00:20:32.988 "is_configured": true, 00:20:32.988 "data_offset": 0, 00:20:32.988 "data_size": 65536 00:20:32.988 }, 00:20:32.988 { 00:20:32.988 "name": "BaseBdev4", 00:20:32.988 "uuid": "eca8c179-d885-5c80-b3ee-93f0bec4a7bb", 00:20:32.988 "is_configured": true, 00:20:32.988 "data_offset": 0, 00:20:32.988 "data_size": 65536 00:20:32.988 } 00:20:32.988 ] 00:20:32.988 }' 00:20:32.988 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:32.988 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:32.988 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:32.988 [2024-08-14 06:52:00.141795] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:32.988 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.988 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local 
timeout=879 00:20:32.988 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:20:32.988 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.988 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:32.988 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:32.988 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:32.988 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:32.988 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.988 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.247 [2024-08-14 06:52:00.345455] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:33.247 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:33.247 "name": "raid_bdev1", 00:20:33.247 "uuid": "3a3cc7b5-c6bf-4d91-bbfc-2308ee04554c", 00:20:33.247 "strip_size_kb": 0, 00:20:33.247 "state": "online", 00:20:33.247 "raid_level": "raid1", 00:20:33.247 "superblock": false, 00:20:33.247 "num_base_bdevs": 4, 00:20:33.247 "num_base_bdevs_discovered": 3, 00:20:33.247 "num_base_bdevs_operational": 3, 00:20:33.247 "process": { 00:20:33.247 "type": "rebuild", 00:20:33.247 "target": "spare", 00:20:33.247 "progress": { 00:20:33.247 "blocks": 28672, 00:20:33.247 "percent": 43 00:20:33.247 } 00:20:33.247 }, 00:20:33.247 "base_bdevs_list": [ 00:20:33.247 { 00:20:33.247 "name": "spare", 00:20:33.247 "uuid": "0aa4fdaa-8a2a-5a75-a106-55384d7371cf", 00:20:33.247 "is_configured": true, 00:20:33.247 "data_offset": 0, 00:20:33.247 "data_size": 65536 00:20:33.247 }, 00:20:33.247 { 00:20:33.247 "name": null, 00:20:33.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.247 "is_configured": false, 00:20:33.247 "data_offset": 0, 00:20:33.247 "data_size": 65536 00:20:33.247 }, 00:20:33.247 { 00:20:33.247 "name": "BaseBdev3", 00:20:33.247 "uuid": "de646eb4-5d9b-5616-b96a-ecfab9ceba92", 00:20:33.247 "is_configured": true, 00:20:33.247 "data_offset": 0, 00:20:33.247 "data_size": 65536 00:20:33.247 }, 00:20:33.247 { 00:20:33.247 "name": "BaseBdev4", 00:20:33.247 "uuid": "eca8c179-d885-5c80-b3ee-93f0bec4a7bb", 00:20:33.247 "is_configured": true, 00:20:33.247 "data_offset": 0, 00:20:33.247 "data_size": 65536 00:20:33.247 } 00:20:33.247 ] 00:20:33.247 }' 00:20:33.247 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:33.247 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:33.247 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:33.247 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:33.247 06:52:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:20:33.869 [2024-08-14 06:52:00.780064] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:33.869 [2024-08-14 06:52:00.780740] bdev_raid.c: 
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:34.437 06:52:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:20:34.437 06:52:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:34.437 06:52:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:34.437 06:52:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:34.437 06:52:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:34.437 06:52:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:34.437 06:52:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.437 06:52:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.437 [2024-08-14 06:52:01.469281] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:20:34.437 06:52:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:34.437 "name": "raid_bdev1", 00:20:34.437 "uuid": "3a3cc7b5-c6bf-4d91-bbfc-2308ee04554c", 00:20:34.437 "strip_size_kb": 0, 00:20:34.437 "state": "online", 00:20:34.437 "raid_level": "raid1", 00:20:34.437 "superblock": false, 00:20:34.437 "num_base_bdevs": 4, 00:20:34.437 "num_base_bdevs_discovered": 3, 00:20:34.437 "num_base_bdevs_operational": 3, 00:20:34.437 "process": { 00:20:34.437 "type": "rebuild", 00:20:34.437 "target": "spare", 00:20:34.437 "progress": { 00:20:34.437 "blocks": 45056, 00:20:34.437 "percent": 68 00:20:34.437 } 00:20:34.437 }, 00:20:34.437 "base_bdevs_list": [ 00:20:34.437 { 00:20:34.437 "name": "spare", 00:20:34.437 "uuid": "0aa4fdaa-8a2a-5a75-a106-55384d7371cf", 00:20:34.437 "is_configured": true, 00:20:34.437 "data_offset": 0, 00:20:34.437 "data_size": 65536 00:20:34.437 }, 00:20:34.437 { 00:20:34.437 "name": null, 00:20:34.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.437 "is_configured": false, 00:20:34.437 "data_offset": 0, 00:20:34.437 "data_size": 65536 00:20:34.437 }, 00:20:34.437 { 00:20:34.437 "name": "BaseBdev3", 00:20:34.437 "uuid": "de646eb4-5d9b-5616-b96a-ecfab9ceba92", 00:20:34.437 "is_configured": true, 00:20:34.437 "data_offset": 0, 00:20:34.437 "data_size": 65536 00:20:34.437 }, 00:20:34.437 { 00:20:34.437 "name": "BaseBdev4", 00:20:34.437 "uuid": "eca8c179-d885-5c80-b3ee-93f0bec4a7bb", 00:20:34.437 "is_configured": true, 00:20:34.437 "data_offset": 0, 00:20:34.437 "data_size": 65536 00:20:34.437 } 00:20:34.437 ] 00:20:34.437 }' 00:20:34.437 06:52:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:34.697 06:52:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:34.697 06:52:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:34.697 06:52:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:34.697 06:52:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:20:34.697 [2024-08-14 06:52:01.914007] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 
49152 offset_end: 55296 00:20:34.956 [2024-08-14 06:52:02.023680] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:20:35.215 [2024-08-14 06:52:02.238446] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:20:35.473 [2024-08-14 06:52:02.681795] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:35.733 [2024-08-14 06:52:02.788507] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:35.733 06:52:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:20:35.733 06:52:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:35.733 06:52:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:35.733 06:52:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:35.733 [2024-08-14 06:52:02.791359] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:35.733 06:52:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:35.733 06:52:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:35.733 06:52:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.733 06:52:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.993 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:35.993 "name": "raid_bdev1", 00:20:35.993 "uuid": "3a3cc7b5-c6bf-4d91-bbfc-2308ee04554c", 00:20:35.993 "strip_size_kb": 0, 00:20:35.993 "state": "online", 00:20:35.993 "raid_level": "raid1", 00:20:35.993 "superblock": false, 00:20:35.993 "num_base_bdevs": 4, 00:20:35.993 "num_base_bdevs_discovered": 3, 00:20:35.993 "num_base_bdevs_operational": 3, 00:20:35.993 "base_bdevs_list": [ 00:20:35.993 { 00:20:35.993 "name": "spare", 00:20:35.993 "uuid": "0aa4fdaa-8a2a-5a75-a106-55384d7371cf", 00:20:35.993 "is_configured": true, 00:20:35.993 "data_offset": 0, 00:20:35.993 "data_size": 65536 00:20:35.993 }, 00:20:35.993 { 00:20:35.993 "name": null, 00:20:35.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.993 "is_configured": false, 00:20:35.993 "data_offset": 0, 00:20:35.993 "data_size": 65536 00:20:35.993 }, 00:20:35.993 { 00:20:35.993 "name": "BaseBdev3", 00:20:35.993 "uuid": "de646eb4-5d9b-5616-b96a-ecfab9ceba92", 00:20:35.993 "is_configured": true, 00:20:35.993 "data_offset": 0, 00:20:35.993 "data_size": 65536 00:20:35.993 }, 00:20:35.993 { 00:20:35.993 "name": "BaseBdev4", 00:20:35.993 "uuid": "eca8c179-d885-5c80-b3ee-93f0bec4a7bb", 00:20:35.993 "is_configured": true, 00:20:35.993 "data_offset": 0, 00:20:35.993 "data_size": 65536 00:20:35.993 } 00:20:35.993 ] 00:20:35.993 }' 00:20:35.993 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:35.993 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:35.993 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:35.993 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ 
none == \s\p\a\r\e ]] 00:20:35.993 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:20:35.993 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:35.993 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:35.993 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:35.993 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:35.993 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:35.993 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.993 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.253 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:36.253 "name": "raid_bdev1", 00:20:36.253 "uuid": "3a3cc7b5-c6bf-4d91-bbfc-2308ee04554c", 00:20:36.253 "strip_size_kb": 0, 00:20:36.253 "state": "online", 00:20:36.253 "raid_level": "raid1", 00:20:36.253 "superblock": false, 00:20:36.253 "num_base_bdevs": 4, 00:20:36.253 "num_base_bdevs_discovered": 3, 00:20:36.253 "num_base_bdevs_operational": 3, 00:20:36.253 "base_bdevs_list": [ 00:20:36.253 { 00:20:36.253 "name": "spare", 00:20:36.253 "uuid": "0aa4fdaa-8a2a-5a75-a106-55384d7371cf", 00:20:36.253 "is_configured": true, 00:20:36.253 "data_offset": 0, 00:20:36.253 "data_size": 65536 00:20:36.253 }, 00:20:36.253 { 00:20:36.253 "name": null, 00:20:36.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.253 "is_configured": false, 00:20:36.253 "data_offset": 0, 00:20:36.253 "data_size": 65536 00:20:36.253 }, 00:20:36.253 { 00:20:36.253 "name": "BaseBdev3", 00:20:36.253 "uuid": "de646eb4-5d9b-5616-b96a-ecfab9ceba92", 00:20:36.253 "is_configured": true, 00:20:36.253 "data_offset": 0, 00:20:36.253 "data_size": 65536 00:20:36.253 }, 00:20:36.253 { 00:20:36.253 "name": "BaseBdev4", 00:20:36.253 "uuid": "eca8c179-d885-5c80-b3ee-93f0bec4a7bb", 00:20:36.253 "is_configured": true, 00:20:36.253 "data_offset": 0, 00:20:36.253 "data_size": 65536 00:20:36.253 } 00:20:36.253 ] 00:20:36.253 }' 00:20:36.253 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:36.253 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:36.253 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:36.253 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:36.253 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:36.254 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:36.254 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:36.254 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:36.254 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:36.254 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:36.254 06:52:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:36.254 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:36.254 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:36.254 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:36.254 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.254 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.512 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:36.512 "name": "raid_bdev1", 00:20:36.512 "uuid": "3a3cc7b5-c6bf-4d91-bbfc-2308ee04554c", 00:20:36.512 "strip_size_kb": 0, 00:20:36.512 "state": "online", 00:20:36.512 "raid_level": "raid1", 00:20:36.512 "superblock": false, 00:20:36.512 "num_base_bdevs": 4, 00:20:36.512 "num_base_bdevs_discovered": 3, 00:20:36.513 "num_base_bdevs_operational": 3, 00:20:36.513 "base_bdevs_list": [ 00:20:36.513 { 00:20:36.513 "name": "spare", 00:20:36.513 "uuid": "0aa4fdaa-8a2a-5a75-a106-55384d7371cf", 00:20:36.513 "is_configured": true, 00:20:36.513 "data_offset": 0, 00:20:36.513 "data_size": 65536 00:20:36.513 }, 00:20:36.513 { 00:20:36.513 "name": null, 00:20:36.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.513 "is_configured": false, 00:20:36.513 "data_offset": 0, 00:20:36.513 "data_size": 65536 00:20:36.513 }, 00:20:36.513 { 00:20:36.513 "name": "BaseBdev3", 00:20:36.513 "uuid": "de646eb4-5d9b-5616-b96a-ecfab9ceba92", 00:20:36.513 "is_configured": true, 00:20:36.513 "data_offset": 0, 00:20:36.513 "data_size": 65536 00:20:36.513 }, 00:20:36.513 { 00:20:36.513 "name": "BaseBdev4", 00:20:36.513 "uuid": "eca8c179-d885-5c80-b3ee-93f0bec4a7bb", 00:20:36.513 "is_configured": true, 00:20:36.513 "data_offset": 0, 00:20:36.513 "data_size": 65536 00:20:36.513 } 00:20:36.513 ] 00:20:36.513 }' 00:20:36.513 06:52:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:36.513 06:52:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.080 06:52:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:37.339 [2024-08-14 06:52:04.536353] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:37.340 [2024-08-14 06:52:04.536474] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:37.598 00:20:37.598 Latency(us) 00:20:37.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.598 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:37.598 raid_bdev1 : 11.32 99.28 297.85 0.00 0.00 13807.61 311.22 119052.30 00:20:37.598 =================================================================================================================== 00:20:37.598 Total : 99.28 297.85 0.00 0.00 13807.61 311.22 119052.30 00:20:37.598 [2024-08-14 06:52:04.636194] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.598 [2024-08-14 06:52:04.636298] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.598 [2024-08-14 06:52:04.636455] bdev_raid.c: 
464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.598 [2024-08-14 06:52:04.636532] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:20:37.598 0 00:20:37.598 06:52:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.598 06:52:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:20:37.858 06:52:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:20:37.858 06:52:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:20:37.858 06:52:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:20:37.858 06:52:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:37.858 06:52:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:37.858 06:52:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:37.858 06:52:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:37.858 06:52:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:37.858 06:52:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:37.858 06:52:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:37.858 06:52:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:37.858 06:52:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:37.858 06:52:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:38.117 /dev/nbd0 00:20:38.117 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:38.117 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:38.117 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:20:38.117 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:20:38.117 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:38.117 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:38.118 1+0 records in 00:20:38.118 1+0 records out 00:20:38.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322744 s, 12.7 MB/s 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.118 06:52:05 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # continue 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:38.118 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:20:38.378 /dev/nbd1 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:20:38.378 1+0 records in 00:20:38.378 1+0 records out 00:20:38.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277571 s, 14.8 MB/s 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:38.378 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:38.638 06:52:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:20:38.897 /dev/nbd1 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:38.897 1+0 records in 00:20:38.897 1+0 records out 00:20:38.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523997 s, 7.8 MB/s 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:38.897 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:39.156 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:39.156 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:39.156 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:39.156 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:39.156 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:39.156 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:39.156 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:39.415 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:39.415 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:39.415 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:39.415 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:39.415 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:39.415 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:39.415 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:39.415 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:39.415 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:39.415 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:39.415 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:39.415 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:39.415 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:39.415 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:39.415 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 96520 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@946 -- # '[' -z 96520 ']' 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # kill -0 96520 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # uname 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96520 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:39.674 06:52:06 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96520' 00:20:39.674 killing process with pid 96520 00:20:39.674 Received shutdown signal, test time was about 13.453592 seconds 00:20:39.674 00:20:39.674 Latency(us) 00:20:39.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.674 =================================================================================================================== 00:20:39.674 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@965 -- # kill 96520 00:20:39.674 [2024-08-14 06:52:06.760371] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:39.674 06:52:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # wait 96520 00:20:39.674 [2024-08-14 06:52:06.807461] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:39.933 06:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:20:39.933 00:20:39.933 real 0m18.990s 00:20:39.933 user 0m29.907s 00:20:39.933 sys 0m2.721s 00:20:39.933 06:52:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:39.933 ************************************ 00:20:39.933 END TEST raid_rebuild_test_io 00:20:39.933 ************************************ 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:39.934 06:52:07 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:20:39.934 06:52:07 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:20:39.934 06:52:07 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:39.934 06:52:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.934 ************************************ 00:20:39.934 START TEST raid_rebuild_test_sb_io 00:20:39.934 ************************************ 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 true true true 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:20:39.934 06:52:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=96990 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 96990 /var/tmp/spdk-raid.sock 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@827 -- # '[' -z 96990 ']' 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:39.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:39.934 06:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:40.194 [2024-08-14 06:52:07.198380] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:20:40.194 [2024-08-14 06:52:07.198618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96990 ] 00:20:40.194 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:40.194 Zero copy mechanism will not be used. 00:20:40.194 [2024-08-14 06:52:07.346681] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.194 [2024-08-14 06:52:07.400881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.194 [2024-08-14 06:52:07.446036] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:40.194 [2024-08-14 06:52:07.446203] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:41.131 06:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:41.131 06:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # return 0 00:20:41.131 06:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:20:41.131 06:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:41.131 BaseBdev1_malloc 00:20:41.131 06:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:41.390 [2024-08-14 06:52:08.587949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:41.390 [2024-08-14 06:52:08.588147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.390 [2024-08-14 06:52:08.588206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:20:41.390 [2024-08-14 06:52:08.588248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.390 [2024-08-14 06:52:08.590911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.390 [2024-08-14 06:52:08.591040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:41.390 BaseBdev1 00:20:41.390 06:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:20:41.390 06:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:41.650 BaseBdev2_malloc 00:20:41.650 06:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:41.909 [2024-08-14 06:52:09.072594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:41.909 [2024-08-14 06:52:09.072773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.909 [2024-08-14 06:52:09.072827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:41.909 [2024-08-14 06:52:09.072869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.909 [2024-08-14 06:52:09.075455] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:20:41.909 [2024-08-14 06:52:09.075556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:41.909 BaseBdev2 00:20:41.909 06:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:20:41.909 06:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:42.168 BaseBdev3_malloc 00:20:42.168 06:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:42.427 [2024-08-14 06:52:09.554771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:42.427 [2024-08-14 06:52:09.554946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.427 [2024-08-14 06:52:09.555012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:42.427 [2024-08-14 06:52:09.555050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.427 [2024-08-14 06:52:09.557522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.427 [2024-08-14 06:52:09.557611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:42.427 BaseBdev3 00:20:42.427 06:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:20:42.427 06:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:42.686 BaseBdev4_malloc 00:20:42.686 06:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:42.945 [2024-08-14 06:52:09.995347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:42.945 [2024-08-14 06:52:09.995434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.945 [2024-08-14 06:52:09.995468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:42.945 [2024-08-14 06:52:09.995486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.945 [2024-08-14 06:52:09.997943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.945 [2024-08-14 06:52:09.997988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:42.945 BaseBdev4 00:20:42.945 06:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:43.204 spare_malloc 00:20:43.204 06:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:43.463 spare_delay 00:20:43.463 06:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:43.463 [2024-08-14 
06:52:10.711377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:43.463 [2024-08-14 06:52:10.711464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.463 [2024-08-14 06:52:10.711492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:43.463 [2024-08-14 06:52:10.711505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.463 [2024-08-14 06:52:10.713999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.463 [2024-08-14 06:52:10.714053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:43.463 spare 00:20:43.722 06:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:43.722 [2024-08-14 06:52:10.967056] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:43.722 [2024-08-14 06:52:10.969279] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:43.722 [2024-08-14 06:52:10.969358] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:43.722 [2024-08-14 06:52:10.969413] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:43.722 [2024-08-14 06:52:10.969630] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:20:43.722 [2024-08-14 06:52:10.969646] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:43.722 [2024-08-14 06:52:10.970041] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:20:43.722 [2024-08-14 06:52:10.970233] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:20:43.722 [2024-08-14 06:52:10.970247] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:20:43.722 [2024-08-14 06:52:10.970431] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.981 06:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:43.981 06:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:43.981 06:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:43.981 06:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:43.982 06:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:43.982 06:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:43.982 06:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:43.982 06:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:43.982 06:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:43.982 06:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:43.982 06:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:20:43.982 06:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.982 06:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:43.982 "name": "raid_bdev1", 00:20:43.982 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:20:43.982 "strip_size_kb": 0, 00:20:43.982 "state": "online", 00:20:43.982 "raid_level": "raid1", 00:20:43.982 "superblock": true, 00:20:43.982 "num_base_bdevs": 4, 00:20:43.982 "num_base_bdevs_discovered": 4, 00:20:43.982 "num_base_bdevs_operational": 4, 00:20:43.982 "base_bdevs_list": [ 00:20:43.982 { 00:20:43.982 "name": "BaseBdev1", 00:20:43.982 "uuid": "d412fb3b-d0d3-59ba-8aef-edb68815b525", 00:20:43.982 "is_configured": true, 00:20:43.982 "data_offset": 2048, 00:20:43.982 "data_size": 63488 00:20:43.982 }, 00:20:43.982 { 00:20:43.982 "name": "BaseBdev2", 00:20:43.982 "uuid": "a67dfed0-3965-5301-94e5-8eb60bbe3bda", 00:20:43.982 "is_configured": true, 00:20:43.982 "data_offset": 2048, 00:20:43.982 "data_size": 63488 00:20:43.982 }, 00:20:43.982 { 00:20:43.982 "name": "BaseBdev3", 00:20:43.982 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:20:43.982 "is_configured": true, 00:20:43.982 "data_offset": 2048, 00:20:43.982 "data_size": 63488 00:20:43.982 }, 00:20:43.982 { 00:20:43.982 "name": "BaseBdev4", 00:20:43.982 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:20:43.982 "is_configured": true, 00:20:43.982 "data_offset": 2048, 00:20:43.982 "data_size": 63488 00:20:43.982 } 00:20:43.982 ] 00:20:43.982 }' 00:20:43.982 06:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:43.982 06:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:44.918 06:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:44.918 06:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:20:44.918 [2024-08-14 06:52:12.078430] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:44.918 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:20:44.918 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.918 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:45.177 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:20:45.177 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:20:45.177 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:45.177 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:45.435 [2024-08-14 06:52:12.500115] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:20:45.435 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:45.435 Zero copy mechanism will not be used. 00:20:45.435 Running I/O for 60 seconds... 
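The verification that follows re-reads the array state over the same RPC socket and filters it with jq, exactly as the verify_raid_bdev_state calls in the trace below do. A minimal stand-alone sketch of that check, assuming the bdevperf app from this run is still listening on /var/tmp/spdk-raid.sock; the explicit online/3 expectation is illustrative, taken from this particular run after BaseBdev1 is removed:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Pull the full RAID inventory and keep only the bdev under test.
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  state=$(jq -r '.state' <<< "$info")
  discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
  # A raid1 built from 4 base bdevs should stay online with 3 discovered once one is pulled.
  [[ "$state" == online && "$discovered" -eq 3 ]] || echo "unexpected raid_bdev1 state: $state with $discovered base bdevs"
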
00:20:45.436 [2024-08-14 06:52:12.573769] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:45.436 [2024-08-14 06:52:12.580468] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:20:45.436 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:45.436 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:45.436 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:45.436 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:45.436 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:45.436 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:45.436 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:45.436 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:45.436 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:45.436 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:45.436 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.436 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.694 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:45.694 "name": "raid_bdev1", 00:20:45.694 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:20:45.694 "strip_size_kb": 0, 00:20:45.694 "state": "online", 00:20:45.694 "raid_level": "raid1", 00:20:45.694 "superblock": true, 00:20:45.694 "num_base_bdevs": 4, 00:20:45.694 "num_base_bdevs_discovered": 3, 00:20:45.694 "num_base_bdevs_operational": 3, 00:20:45.694 "base_bdevs_list": [ 00:20:45.694 { 00:20:45.694 "name": null, 00:20:45.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.694 "is_configured": false, 00:20:45.694 "data_offset": 2048, 00:20:45.694 "data_size": 63488 00:20:45.694 }, 00:20:45.694 { 00:20:45.694 "name": "BaseBdev2", 00:20:45.694 "uuid": "a67dfed0-3965-5301-94e5-8eb60bbe3bda", 00:20:45.694 "is_configured": true, 00:20:45.694 "data_offset": 2048, 00:20:45.694 "data_size": 63488 00:20:45.694 }, 00:20:45.694 { 00:20:45.694 "name": "BaseBdev3", 00:20:45.694 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:20:45.694 "is_configured": true, 00:20:45.694 "data_offset": 2048, 00:20:45.694 "data_size": 63488 00:20:45.694 }, 00:20:45.694 { 00:20:45.694 "name": "BaseBdev4", 00:20:45.694 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:20:45.694 "is_configured": true, 00:20:45.694 "data_offset": 2048, 00:20:45.694 "data_size": 63488 00:20:45.694 } 00:20:45.694 ] 00:20:45.694 }' 00:20:45.694 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:45.694 06:52:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:46.260 06:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:46.518 [2024-08-14 
06:52:13.750528] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:46.776 06:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:46.776 [2024-08-14 06:52:13.834051] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:20:46.776 [2024-08-14 06:52:13.836466] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:46.776 [2024-08-14 06:52:13.944156] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:46.776 [2024-08-14 06:52:13.944856] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:47.035 [2024-08-14 06:52:14.158128] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:47.035 [2024-08-14 06:52:14.158995] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:47.602 06:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.602 06:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:47.602 06:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:47.602 06:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:47.602 06:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:47.602 06:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.602 06:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.861 [2024-08-14 06:52:15.031839] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:47.861 [2024-08-14 06:52:15.032210] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:47.861 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:47.861 "name": "raid_bdev1", 00:20:47.861 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:20:47.861 "strip_size_kb": 0, 00:20:47.861 "state": "online", 00:20:47.861 "raid_level": "raid1", 00:20:47.861 "superblock": true, 00:20:47.861 "num_base_bdevs": 4, 00:20:47.861 "num_base_bdevs_discovered": 4, 00:20:47.861 "num_base_bdevs_operational": 4, 00:20:47.861 "process": { 00:20:47.861 "type": "rebuild", 00:20:47.861 "target": "spare", 00:20:47.861 "progress": { 00:20:47.861 "blocks": 16384, 00:20:47.861 "percent": 25 00:20:47.861 } 00:20:47.861 }, 00:20:47.861 "base_bdevs_list": [ 00:20:47.861 { 00:20:47.861 "name": "spare", 00:20:47.861 "uuid": "a26e8cd2-d817-5922-9d7e-b45f72f3bd58", 00:20:47.861 "is_configured": true, 00:20:47.861 "data_offset": 2048, 00:20:47.861 "data_size": 63488 00:20:47.861 }, 00:20:47.861 { 00:20:47.861 "name": "BaseBdev2", 00:20:47.861 "uuid": "a67dfed0-3965-5301-94e5-8eb60bbe3bda", 00:20:47.861 "is_configured": true, 00:20:47.861 "data_offset": 2048, 00:20:47.861 "data_size": 63488 00:20:47.861 }, 00:20:47.861 { 00:20:47.861 "name": "BaseBdev3", 00:20:47.861 "uuid": 
"1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:20:47.861 "is_configured": true, 00:20:47.861 "data_offset": 2048, 00:20:47.861 "data_size": 63488 00:20:47.861 }, 00:20:47.861 { 00:20:47.861 "name": "BaseBdev4", 00:20:47.861 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:20:47.861 "is_configured": true, 00:20:47.861 "data_offset": 2048, 00:20:47.861 "data_size": 63488 00:20:47.861 } 00:20:47.861 ] 00:20:47.861 }' 00:20:47.861 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:48.120 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:48.120 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:48.120 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:48.120 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:48.120 [2024-08-14 06:52:15.296943] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:48.120 [2024-08-14 06:52:15.297633] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:48.380 [2024-08-14 06:52:15.423271] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:48.380 [2024-08-14 06:52:15.509949] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:48.380 [2024-08-14 06:52:15.612592] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:48.380 [2024-08-14 06:52:15.616503] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.380 [2024-08-14 06:52:15.616654] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:48.380 [2024-08-14 06:52:15.616694] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:48.380 [2024-08-14 06:52:15.628866] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:20:48.640 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:48.640 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:48.640 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:48.640 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:48.640 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:48.640 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:48.640 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:48.640 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:48.640 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:48.640 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:48.640 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:20:48.640 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.900 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:48.900 "name": "raid_bdev1", 00:20:48.900 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:20:48.900 "strip_size_kb": 0, 00:20:48.900 "state": "online", 00:20:48.900 "raid_level": "raid1", 00:20:48.900 "superblock": true, 00:20:48.900 "num_base_bdevs": 4, 00:20:48.900 "num_base_bdevs_discovered": 3, 00:20:48.900 "num_base_bdevs_operational": 3, 00:20:48.900 "base_bdevs_list": [ 00:20:48.900 { 00:20:48.900 "name": null, 00:20:48.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.900 "is_configured": false, 00:20:48.900 "data_offset": 2048, 00:20:48.900 "data_size": 63488 00:20:48.900 }, 00:20:48.900 { 00:20:48.900 "name": "BaseBdev2", 00:20:48.900 "uuid": "a67dfed0-3965-5301-94e5-8eb60bbe3bda", 00:20:48.900 "is_configured": true, 00:20:48.900 "data_offset": 2048, 00:20:48.900 "data_size": 63488 00:20:48.900 }, 00:20:48.900 { 00:20:48.900 "name": "BaseBdev3", 00:20:48.900 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:20:48.900 "is_configured": true, 00:20:48.900 "data_offset": 2048, 00:20:48.900 "data_size": 63488 00:20:48.900 }, 00:20:48.900 { 00:20:48.900 "name": "BaseBdev4", 00:20:48.900 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:20:48.900 "is_configured": true, 00:20:48.900 "data_offset": 2048, 00:20:48.900 "data_size": 63488 00:20:48.900 } 00:20:48.900 ] 00:20:48.900 }' 00:20:48.900 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:48.900 06:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:49.468 06:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:49.468 06:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:49.468 06:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:49.468 06:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:49.468 06:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:49.468 06:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.468 06:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.728 06:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:49.728 "name": "raid_bdev1", 00:20:49.728 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:20:49.728 "strip_size_kb": 0, 00:20:49.728 "state": "online", 00:20:49.728 "raid_level": "raid1", 00:20:49.728 "superblock": true, 00:20:49.728 "num_base_bdevs": 4, 00:20:49.728 "num_base_bdevs_discovered": 3, 00:20:49.728 "num_base_bdevs_operational": 3, 00:20:49.728 "base_bdevs_list": [ 00:20:49.728 { 00:20:49.728 "name": null, 00:20:49.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.728 "is_configured": false, 00:20:49.728 "data_offset": 2048, 00:20:49.728 "data_size": 63488 00:20:49.728 }, 00:20:49.728 { 00:20:49.728 "name": "BaseBdev2", 00:20:49.728 "uuid": 
"a67dfed0-3965-5301-94e5-8eb60bbe3bda", 00:20:49.728 "is_configured": true, 00:20:49.728 "data_offset": 2048, 00:20:49.728 "data_size": 63488 00:20:49.728 }, 00:20:49.728 { 00:20:49.728 "name": "BaseBdev3", 00:20:49.728 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:20:49.728 "is_configured": true, 00:20:49.728 "data_offset": 2048, 00:20:49.728 "data_size": 63488 00:20:49.728 }, 00:20:49.728 { 00:20:49.728 "name": "BaseBdev4", 00:20:49.728 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:20:49.728 "is_configured": true, 00:20:49.728 "data_offset": 2048, 00:20:49.728 "data_size": 63488 00:20:49.728 } 00:20:49.728 ] 00:20:49.728 }' 00:20:49.728 06:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:49.728 06:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:49.728 06:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:49.728 06:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:49.728 06:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:50.044 [2024-08-14 06:52:17.124302] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:50.044 [2024-08-14 06:52:17.163928] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:20:50.044 [2024-08-14 06:52:17.166091] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:50.044 06:52:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:20:50.302 [2024-08-14 06:52:17.308150] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:50.302 [2024-08-14 06:52:17.547763] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:50.302 [2024-08-14 06:52:17.548229] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:50.868 [2024-08-14 06:52:18.008251] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:50.868 [2024-08-14 06:52:18.009036] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:51.126 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.126 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:51.126 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:51.126 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:51.126 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:51.126 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.126 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.126 [2024-08-14 06:52:18.369042] bdev_raid.c: 
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:51.126 [2024-08-14 06:52:18.370551] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:51.385 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:51.385 "name": "raid_bdev1", 00:20:51.385 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:20:51.385 "strip_size_kb": 0, 00:20:51.385 "state": "online", 00:20:51.385 "raid_level": "raid1", 00:20:51.385 "superblock": true, 00:20:51.385 "num_base_bdevs": 4, 00:20:51.385 "num_base_bdevs_discovered": 4, 00:20:51.385 "num_base_bdevs_operational": 4, 00:20:51.385 "process": { 00:20:51.385 "type": "rebuild", 00:20:51.385 "target": "spare", 00:20:51.385 "progress": { 00:20:51.385 "blocks": 14336, 00:20:51.385 "percent": 22 00:20:51.385 } 00:20:51.385 }, 00:20:51.385 "base_bdevs_list": [ 00:20:51.385 { 00:20:51.385 "name": "spare", 00:20:51.385 "uuid": "a26e8cd2-d817-5922-9d7e-b45f72f3bd58", 00:20:51.385 "is_configured": true, 00:20:51.385 "data_offset": 2048, 00:20:51.385 "data_size": 63488 00:20:51.385 }, 00:20:51.385 { 00:20:51.385 "name": "BaseBdev2", 00:20:51.385 "uuid": "a67dfed0-3965-5301-94e5-8eb60bbe3bda", 00:20:51.385 "is_configured": true, 00:20:51.385 "data_offset": 2048, 00:20:51.385 "data_size": 63488 00:20:51.385 }, 00:20:51.385 { 00:20:51.385 "name": "BaseBdev3", 00:20:51.385 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:20:51.385 "is_configured": true, 00:20:51.385 "data_offset": 2048, 00:20:51.385 "data_size": 63488 00:20:51.385 }, 00:20:51.385 { 00:20:51.385 "name": "BaseBdev4", 00:20:51.385 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:20:51.385 "is_configured": true, 00:20:51.385 "data_offset": 2048, 00:20:51.385 "data_size": 63488 00:20:51.385 } 00:20:51.385 ] 00:20:51.385 }' 00:20:51.385 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:51.385 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:51.385 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:51.385 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:51.385 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:20:51.385 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:20:51.385 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:20:51.385 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:20:51.385 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:20:51.385 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:20:51.385 06:52:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:51.385 [2024-08-14 06:52:18.579323] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:51.385 [2024-08-14 06:52:18.580035] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 
00:20:51.643 [2024-08-14 06:52:18.708847] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:51.902 [2024-08-14 06:52:19.002067] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:20:51.902 [2024-08-14 06:52:19.002138] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:20:51.902 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:20:51.902 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:20:51.902 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.902 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:51.902 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:51.902 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:51.902 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:51.902 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.902 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.902 [2024-08-14 06:52:19.106513] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:51.902 [2024-08-14 06:52:19.107087] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:52.160 [2024-08-14 06:52:19.225037] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:52.160 [2024-08-14 06:52:19.225396] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:52.160 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:52.160 "name": "raid_bdev1", 00:20:52.160 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:20:52.160 "strip_size_kb": 0, 00:20:52.160 "state": "online", 00:20:52.160 "raid_level": "raid1", 00:20:52.160 "superblock": true, 00:20:52.160 "num_base_bdevs": 4, 00:20:52.160 "num_base_bdevs_discovered": 3, 00:20:52.160 "num_base_bdevs_operational": 3, 00:20:52.160 "process": { 00:20:52.160 "type": "rebuild", 00:20:52.160 "target": "spare", 00:20:52.160 "progress": { 00:20:52.160 "blocks": 22528, 00:20:52.160 "percent": 35 00:20:52.160 } 00:20:52.160 }, 00:20:52.160 "base_bdevs_list": [ 00:20:52.160 { 00:20:52.160 "name": "spare", 00:20:52.160 "uuid": "a26e8cd2-d817-5922-9d7e-b45f72f3bd58", 00:20:52.160 "is_configured": true, 00:20:52.160 "data_offset": 2048, 00:20:52.160 "data_size": 63488 00:20:52.160 }, 00:20:52.160 { 00:20:52.160 "name": null, 00:20:52.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.160 "is_configured": false, 00:20:52.160 "data_offset": 2048, 00:20:52.160 "data_size": 63488 00:20:52.160 }, 00:20:52.160 { 00:20:52.160 "name": "BaseBdev3", 00:20:52.160 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:20:52.160 "is_configured": true, 00:20:52.160 "data_offset": 2048, 00:20:52.160 "data_size": 63488 00:20:52.160 }, 00:20:52.160 { 00:20:52.160 "name": 
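As a quick sanity check on the rebuild progress objects above and below, the reported percentage appears to be the block counter taken against the 63488-block data region shown for each base bdev in base_bdevs_list, with integer truncation; for the 22528-block sample just shown:

  # illustrative arithmetic only; 63488 is the data_size reported in base_bdevs_list
  echo $(( 22528 * 100 / 63488 ))   # prints 35, matching "percent": 35 above
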
"BaseBdev4", 00:20:52.160 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:20:52.160 "is_configured": true, 00:20:52.160 "data_offset": 2048, 00:20:52.160 "data_size": 63488 00:20:52.160 } 00:20:52.160 ] 00:20:52.160 }' 00:20:52.160 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:52.160 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.160 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:52.160 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.160 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=898 00:20:52.160 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:20:52.160 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.160 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:52.160 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:52.160 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:52.160 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:52.161 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.161 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.418 [2024-08-14 06:52:19.476899] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:52.418 [2024-08-14 06:52:19.477971] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:52.418 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:52.418 "name": "raid_bdev1", 00:20:52.419 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:20:52.419 "strip_size_kb": 0, 00:20:52.419 "state": "online", 00:20:52.419 "raid_level": "raid1", 00:20:52.419 "superblock": true, 00:20:52.419 "num_base_bdevs": 4, 00:20:52.419 "num_base_bdevs_discovered": 3, 00:20:52.419 "num_base_bdevs_operational": 3, 00:20:52.419 "process": { 00:20:52.419 "type": "rebuild", 00:20:52.419 "target": "spare", 00:20:52.419 "progress": { 00:20:52.419 "blocks": 26624, 00:20:52.419 "percent": 41 00:20:52.419 } 00:20:52.419 }, 00:20:52.419 "base_bdevs_list": [ 00:20:52.419 { 00:20:52.419 "name": "spare", 00:20:52.419 "uuid": "a26e8cd2-d817-5922-9d7e-b45f72f3bd58", 00:20:52.419 "is_configured": true, 00:20:52.419 "data_offset": 2048, 00:20:52.419 "data_size": 63488 00:20:52.419 }, 00:20:52.419 { 00:20:52.419 "name": null, 00:20:52.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.419 "is_configured": false, 00:20:52.419 "data_offset": 2048, 00:20:52.419 "data_size": 63488 00:20:52.419 }, 00:20:52.419 { 00:20:52.419 "name": "BaseBdev3", 00:20:52.419 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:20:52.419 "is_configured": true, 00:20:52.419 "data_offset": 2048, 00:20:52.419 "data_size": 63488 00:20:52.419 }, 00:20:52.419 { 00:20:52.419 "name": "BaseBdev4", 
00:20:52.419 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:20:52.419 "is_configured": true, 00:20:52.419 "data_offset": 2048, 00:20:52.419 "data_size": 63488 00:20:52.419 } 00:20:52.419 ] 00:20:52.419 }' 00:20:52.419 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:52.677 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.677 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:52.677 [2024-08-14 06:52:19.704975] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:52.677 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.677 06:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:20:52.934 [2024-08-14 06:52:20.060536] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:53.192 [2024-08-14 06:52:20.410386] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:53.759 06:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:20:53.759 06:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.759 06:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:53.759 06:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:53.759 06:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:53.759 06:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:53.759 06:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.759 06:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.759 [2024-08-14 06:52:20.853730] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:20:53.759 [2024-08-14 06:52:20.854296] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:20:53.759 06:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:53.759 "name": "raid_bdev1", 00:20:53.759 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:20:53.759 "strip_size_kb": 0, 00:20:53.759 "state": "online", 00:20:53.759 "raid_level": "raid1", 00:20:53.759 "superblock": true, 00:20:53.759 "num_base_bdevs": 4, 00:20:53.759 "num_base_bdevs_discovered": 3, 00:20:53.759 "num_base_bdevs_operational": 3, 00:20:53.759 "process": { 00:20:53.759 "type": "rebuild", 00:20:53.759 "target": "spare", 00:20:53.759 "progress": { 00:20:53.759 "blocks": 45056, 00:20:53.759 "percent": 70 00:20:53.759 } 00:20:53.759 }, 00:20:53.759 "base_bdevs_list": [ 00:20:53.759 { 00:20:53.759 "name": "spare", 00:20:53.759 "uuid": "a26e8cd2-d817-5922-9d7e-b45f72f3bd58", 00:20:53.759 "is_configured": true, 00:20:53.759 "data_offset": 2048, 00:20:53.759 "data_size": 63488 00:20:53.759 }, 00:20:53.759 { 00:20:53.759 "name": null, 
00:20:53.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.759 "is_configured": false, 00:20:53.759 "data_offset": 2048, 00:20:53.759 "data_size": 63488 00:20:53.759 }, 00:20:53.759 { 00:20:53.759 "name": "BaseBdev3", 00:20:53.759 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:20:53.759 "is_configured": true, 00:20:53.759 "data_offset": 2048, 00:20:53.759 "data_size": 63488 00:20:53.759 }, 00:20:53.759 { 00:20:53.759 "name": "BaseBdev4", 00:20:53.759 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:20:53.759 "is_configured": true, 00:20:53.759 "data_offset": 2048, 00:20:53.759 "data_size": 63488 00:20:53.759 } 00:20:53.759 ] 00:20:53.759 }' 00:20:53.759 06:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:54.019 06:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.019 06:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:54.019 06:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:54.019 06:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:20:54.589 [2024-08-14 06:52:21.636361] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:20:54.589 [2024-08-14 06:52:21.637283] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:20:54.849 [2024-08-14 06:52:21.854492] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:20:54.849 [2024-08-14 06:52:22.090073] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:54.849 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:20:54.849 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:54.849 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:54.849 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:54.849 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:54.849 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:54.849 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.849 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.109 [2024-08-14 06:52:22.196769] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:55.109 [2024-08-14 06:52:22.199750] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.109 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:55.109 "name": "raid_bdev1", 00:20:55.109 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:20:55.110 "strip_size_kb": 0, 00:20:55.110 "state": "online", 00:20:55.110 "raid_level": "raid1", 00:20:55.110 "superblock": true, 00:20:55.110 "num_base_bdevs": 4, 00:20:55.110 "num_base_bdevs_discovered": 3, 00:20:55.110 
"num_base_bdevs_operational": 3, 00:20:55.110 "base_bdevs_list": [ 00:20:55.110 { 00:20:55.110 "name": "spare", 00:20:55.110 "uuid": "a26e8cd2-d817-5922-9d7e-b45f72f3bd58", 00:20:55.110 "is_configured": true, 00:20:55.110 "data_offset": 2048, 00:20:55.110 "data_size": 63488 00:20:55.110 }, 00:20:55.110 { 00:20:55.110 "name": null, 00:20:55.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.110 "is_configured": false, 00:20:55.110 "data_offset": 2048, 00:20:55.110 "data_size": 63488 00:20:55.110 }, 00:20:55.110 { 00:20:55.110 "name": "BaseBdev3", 00:20:55.110 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:20:55.110 "is_configured": true, 00:20:55.110 "data_offset": 2048, 00:20:55.110 "data_size": 63488 00:20:55.110 }, 00:20:55.110 { 00:20:55.110 "name": "BaseBdev4", 00:20:55.110 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:20:55.110 "is_configured": true, 00:20:55.110 "data_offset": 2048, 00:20:55.110 "data_size": 63488 00:20:55.110 } 00:20:55.110 ] 00:20:55.110 }' 00:20:55.110 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:55.369 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:55.369 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:55.369 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:20:55.369 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:20:55.369 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:55.369 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:55.370 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:55.370 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:55.370 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:55.370 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.370 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:55.630 "name": "raid_bdev1", 00:20:55.630 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:20:55.630 "strip_size_kb": 0, 00:20:55.630 "state": "online", 00:20:55.630 "raid_level": "raid1", 00:20:55.630 "superblock": true, 00:20:55.630 "num_base_bdevs": 4, 00:20:55.630 "num_base_bdevs_discovered": 3, 00:20:55.630 "num_base_bdevs_operational": 3, 00:20:55.630 "base_bdevs_list": [ 00:20:55.630 { 00:20:55.630 "name": "spare", 00:20:55.630 "uuid": "a26e8cd2-d817-5922-9d7e-b45f72f3bd58", 00:20:55.630 "is_configured": true, 00:20:55.630 "data_offset": 2048, 00:20:55.630 "data_size": 63488 00:20:55.630 }, 00:20:55.630 { 00:20:55.630 "name": null, 00:20:55.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.630 "is_configured": false, 00:20:55.630 "data_offset": 2048, 00:20:55.630 "data_size": 63488 00:20:55.630 }, 00:20:55.630 { 00:20:55.630 "name": "BaseBdev3", 00:20:55.630 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:20:55.630 "is_configured": true, 00:20:55.630 "data_offset": 
2048, 00:20:55.630 "data_size": 63488 00:20:55.630 }, 00:20:55.630 { 00:20:55.630 "name": "BaseBdev4", 00:20:55.630 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:20:55.630 "is_configured": true, 00:20:55.630 "data_offset": 2048, 00:20:55.630 "data_size": 63488 00:20:55.630 } 00:20:55.630 ] 00:20:55.630 }' 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.630 06:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.896 06:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:55.896 "name": "raid_bdev1", 00:20:55.896 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:20:55.896 "strip_size_kb": 0, 00:20:55.896 "state": "online", 00:20:55.896 "raid_level": "raid1", 00:20:55.896 "superblock": true, 00:20:55.896 "num_base_bdevs": 4, 00:20:55.896 "num_base_bdevs_discovered": 3, 00:20:55.896 "num_base_bdevs_operational": 3, 00:20:55.896 "base_bdevs_list": [ 00:20:55.896 { 00:20:55.896 "name": "spare", 00:20:55.896 "uuid": "a26e8cd2-d817-5922-9d7e-b45f72f3bd58", 00:20:55.896 "is_configured": true, 00:20:55.896 "data_offset": 2048, 00:20:55.896 "data_size": 63488 00:20:55.896 }, 00:20:55.896 { 00:20:55.896 "name": null, 00:20:55.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.896 "is_configured": false, 00:20:55.896 "data_offset": 2048, 00:20:55.896 "data_size": 63488 00:20:55.896 }, 00:20:55.896 { 00:20:55.896 "name": "BaseBdev3", 00:20:55.896 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:20:55.896 "is_configured": true, 00:20:55.896 "data_offset": 2048, 00:20:55.896 "data_size": 63488 00:20:55.896 }, 00:20:55.896 { 00:20:55.896 "name": "BaseBdev4", 00:20:55.896 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:20:55.896 "is_configured": true, 00:20:55.896 "data_offset": 
2048, 00:20:55.896 "data_size": 63488 00:20:55.896 } 00:20:55.896 ] 00:20:55.896 }' 00:20:55.896 06:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:55.896 06:52:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:56.473 06:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:56.731 [2024-08-14 06:52:23.873630] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:56.731 [2024-08-14 06:52:23.873674] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:56.731 00:20:56.731 Latency(us) 00:20:56.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.731 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:56.731 raid_bdev1 : 11.46 97.34 292.02 0.00 0.00 14179.73 380.98 121799.66 00:20:56.731 =================================================================================================================== 00:20:56.731 Total : 97.34 292.02 0.00 0.00 14179.73 380.98 121799.66 00:20:56.731 0 00:20:56.731 [2024-08-14 06:52:23.949650] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.731 [2024-08-14 06:52:23.949711] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.731 [2024-08-14 06:52:23.949856] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.731 [2024-08-14 06:52:23.949869] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:20:56.731 06:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.731 06:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:20:56.989 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:20:56.989 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:20:56.989 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:20:56.989 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:56.989 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:56.989 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:56.989 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:56.989 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:56.989 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:56.989 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:20:56.989 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:56.989 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:56.989 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:57.247 /dev/nbd0 
00:20:57.247 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:57.247 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:57.247 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:20:57.247 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:20:57.247 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:57.247 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:57.247 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:20:57.247 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:20:57.247 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:57.247 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:57.247 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:57.247 1+0 records in 00:20:57.247 1+0 records out 00:20:57.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536417 s, 7.6 MB/s 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # continue 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:20:57.506 
06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:20:57.506 /dev/nbd1 00:20:57.506 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:57.765 1+0 records in 00:20:57.765 1+0 records out 00:20:57.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375999 s, 10.9 MB/s 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:57.765 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:57.766 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:20:57.766 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:57.766 06:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:58.024 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:20:58.283 /dev/nbd1 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:20:58.283 1+0 records in 00:20:58.283 1+0 records out 00:20:58.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529983 s, 7.7 MB/s 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:58.283 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:58.542 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:58.542 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:58.542 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:58.542 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:58.542 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:58.542 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:58.542 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:20:58.542 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:58.542 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:58.542 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:58.542 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:58.542 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:58.542 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:20:58.542 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:58.542 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:58.800 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:58.800 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:58.800 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:58.800 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:58.800 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:58.800 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:58.800 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:20:58.800 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:58.800 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:20:58.800 06:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:59.058 06:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:59.318 [2024-08-14 06:52:26.479789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:59.318 [2024-08-14 06:52:26.479873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.318 [2024-08-14 06:52:26.479901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:59.318 [2024-08-14 06:52:26.479912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.318 [2024-08-14 06:52:26.482406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.318 [2024-08-14 06:52:26.482451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:59.318 [2024-08-14 06:52:26.482588] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:59.318 [2024-08-14 06:52:26.482640] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:59.318 [2024-08-14 06:52:26.482795] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:59.318 [2024-08-14 06:52:26.482893] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:59.318 spare 00:20:59.318 06:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:59.318 06:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:59.318 06:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:59.318 06:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:59.318 06:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:59.318 06:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:59.318 06:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:59.318 06:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:20:59.318 06:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:59.318 06:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:59.318 06:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.318 06:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.577 [2024-08-14 06:52:26.582806] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:20:59.577 [2024-08-14 06:52:26.582947] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:59.577 [2024-08-14 06:52:26.583382] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000337b0 00:20:59.577 [2024-08-14 06:52:26.583616] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:20:59.577 [2024-08-14 06:52:26.583672] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:20:59.577 [2024-08-14 06:52:26.583883] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.577 06:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:59.577 "name": "raid_bdev1", 00:20:59.577 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:20:59.577 "strip_size_kb": 0, 00:20:59.577 "state": "online", 00:20:59.577 "raid_level": "raid1", 00:20:59.577 "superblock": true, 00:20:59.577 "num_base_bdevs": 4, 00:20:59.577 "num_base_bdevs_discovered": 3, 00:20:59.577 "num_base_bdevs_operational": 3, 00:20:59.577 "base_bdevs_list": [ 00:20:59.577 { 00:20:59.577 "name": "spare", 00:20:59.577 "uuid": "a26e8cd2-d817-5922-9d7e-b45f72f3bd58", 00:20:59.577 "is_configured": true, 00:20:59.577 "data_offset": 2048, 00:20:59.577 "data_size": 63488 00:20:59.577 }, 00:20:59.577 { 00:20:59.577 "name": null, 00:20:59.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.577 "is_configured": false, 00:20:59.577 "data_offset": 2048, 00:20:59.577 "data_size": 63488 00:20:59.577 }, 00:20:59.577 { 00:20:59.577 "name": "BaseBdev3", 00:20:59.577 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:20:59.577 "is_configured": true, 00:20:59.577 "data_offset": 2048, 00:20:59.577 "data_size": 63488 00:20:59.577 }, 00:20:59.577 { 00:20:59.577 "name": "BaseBdev4", 00:20:59.577 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:20:59.577 "is_configured": true, 00:20:59.577 "data_offset": 2048, 00:20:59.577 "data_size": 63488 00:20:59.577 } 00:20:59.577 ] 00:20:59.577 }' 00:20:59.577 06:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:59.577 06:52:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:00.515 06:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:00.515 06:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:00.515 06:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:00.515 06:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:00.515 06:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:00.515 06:52:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.515 06:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.515 06:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:00.515 "name": "raid_bdev1", 00:21:00.515 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:21:00.515 "strip_size_kb": 0, 00:21:00.515 "state": "online", 00:21:00.515 "raid_level": "raid1", 00:21:00.515 "superblock": true, 00:21:00.515 "num_base_bdevs": 4, 00:21:00.515 "num_base_bdevs_discovered": 3, 00:21:00.515 "num_base_bdevs_operational": 3, 00:21:00.515 "base_bdevs_list": [ 00:21:00.515 { 00:21:00.515 "name": "spare", 00:21:00.515 "uuid": "a26e8cd2-d817-5922-9d7e-b45f72f3bd58", 00:21:00.515 "is_configured": true, 00:21:00.515 "data_offset": 2048, 00:21:00.515 "data_size": 63488 00:21:00.515 }, 00:21:00.515 { 00:21:00.515 "name": null, 00:21:00.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.515 "is_configured": false, 00:21:00.515 "data_offset": 2048, 00:21:00.515 "data_size": 63488 00:21:00.515 }, 00:21:00.515 { 00:21:00.515 "name": "BaseBdev3", 00:21:00.515 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:21:00.515 "is_configured": true, 00:21:00.515 "data_offset": 2048, 00:21:00.515 "data_size": 63488 00:21:00.515 }, 00:21:00.515 { 00:21:00.515 "name": "BaseBdev4", 00:21:00.515 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:21:00.515 "is_configured": true, 00:21:00.515 "data_offset": 2048, 00:21:00.515 "data_size": 63488 00:21:00.515 } 00:21:00.515 ] 00:21:00.515 }' 00:21:00.515 06:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:00.515 06:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:00.515 06:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:00.515 06:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:00.515 06:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.515 06:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:00.774 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.774 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:01.034 [2024-08-14 06:52:28.270000] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:01.293 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:01.293 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:01.293 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:01.293 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:01.293 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:01.293 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:21:01.293 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:01.293 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:01.293 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:01.293 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:01.293 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.293 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.552 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:01.552 "name": "raid_bdev1", 00:21:01.552 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:21:01.552 "strip_size_kb": 0, 00:21:01.552 "state": "online", 00:21:01.552 "raid_level": "raid1", 00:21:01.553 "superblock": true, 00:21:01.553 "num_base_bdevs": 4, 00:21:01.553 "num_base_bdevs_discovered": 2, 00:21:01.553 "num_base_bdevs_operational": 2, 00:21:01.553 "base_bdevs_list": [ 00:21:01.553 { 00:21:01.553 "name": null, 00:21:01.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.553 "is_configured": false, 00:21:01.553 "data_offset": 2048, 00:21:01.553 "data_size": 63488 00:21:01.553 }, 00:21:01.553 { 00:21:01.553 "name": null, 00:21:01.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.553 "is_configured": false, 00:21:01.553 "data_offset": 2048, 00:21:01.553 "data_size": 63488 00:21:01.553 }, 00:21:01.553 { 00:21:01.553 "name": "BaseBdev3", 00:21:01.553 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:21:01.553 "is_configured": true, 00:21:01.553 "data_offset": 2048, 00:21:01.553 "data_size": 63488 00:21:01.553 }, 00:21:01.553 { 00:21:01.553 "name": "BaseBdev4", 00:21:01.553 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:21:01.553 "is_configured": true, 00:21:01.553 "data_offset": 2048, 00:21:01.553 "data_size": 63488 00:21:01.553 } 00:21:01.553 ] 00:21:01.553 }' 00:21:01.553 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:01.553 06:52:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:02.147 06:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:02.427 [2024-08-14 06:52:29.442029] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:02.427 [2024-08-14 06:52:29.442377] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:02.427 [2024-08-14 06:52:29.442450] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
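At this point the test has removed the spare from raid_bdev1 (leaving the array online with two operational members) and re-added it; because the spare still carries an older superblock for the same array (seq_number 5 vs. 6), the raid module accepts it back and starts a fresh rebuild, per the "Re-adding bdev spare" notice above. A minimal, hypothetical reproduction of that remove/re-add cycle with the same RPC calls (socket path and names as in this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# drop the spare: the array stays online but degrades to 2 operational base bdevs
"$rpc" -s "$sock" bdev_raid_remove_base_bdev spare

# re-add it: the superblock on "spare" is older than the array's, so the raid
# module re-accepts the bdev and starts a rebuild onto it
"$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare

# poll until the rebuild process disappears from the bdev description
while [ "$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')" = rebuild ]; do
    sleep 1
done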
00:21:02.427 [2024-08-14 06:52:29.442586] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:02.427 [2024-08-14 06:52:29.446494] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033880 00:21:02.427 [2024-08-14 06:52:29.448782] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:02.427 06:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:21:03.366 06:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:03.366 06:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:03.366 06:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:03.366 06:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:03.366 06:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:03.366 06:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.366 06:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.624 06:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:03.624 "name": "raid_bdev1", 00:21:03.624 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:21:03.624 "strip_size_kb": 0, 00:21:03.624 "state": "online", 00:21:03.624 "raid_level": "raid1", 00:21:03.624 "superblock": true, 00:21:03.624 "num_base_bdevs": 4, 00:21:03.624 "num_base_bdevs_discovered": 3, 00:21:03.624 "num_base_bdevs_operational": 3, 00:21:03.624 "process": { 00:21:03.624 "type": "rebuild", 00:21:03.624 "target": "spare", 00:21:03.624 "progress": { 00:21:03.624 "blocks": 24576, 00:21:03.624 "percent": 38 00:21:03.624 } 00:21:03.624 }, 00:21:03.624 "base_bdevs_list": [ 00:21:03.624 { 00:21:03.624 "name": "spare", 00:21:03.624 "uuid": "a26e8cd2-d817-5922-9d7e-b45f72f3bd58", 00:21:03.624 "is_configured": true, 00:21:03.624 "data_offset": 2048, 00:21:03.624 "data_size": 63488 00:21:03.624 }, 00:21:03.624 { 00:21:03.624 "name": null, 00:21:03.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.624 "is_configured": false, 00:21:03.624 "data_offset": 2048, 00:21:03.624 "data_size": 63488 00:21:03.624 }, 00:21:03.624 { 00:21:03.624 "name": "BaseBdev3", 00:21:03.624 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:21:03.624 "is_configured": true, 00:21:03.624 "data_offset": 2048, 00:21:03.624 "data_size": 63488 00:21:03.624 }, 00:21:03.624 { 00:21:03.624 "name": "BaseBdev4", 00:21:03.624 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:21:03.624 "is_configured": true, 00:21:03.624 "data_offset": 2048, 00:21:03.624 "data_size": 63488 00:21:03.624 } 00:21:03.624 ] 00:21:03.624 }' 00:21:03.624 06:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:03.624 06:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:03.624 06:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:03.882 06:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.882 06:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:03.882 [2024-08-14 06:52:31.114124] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:04.142 [2024-08-14 06:52:31.156859] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:04.142 [2024-08-14 06:52:31.156966] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.142 [2024-08-14 06:52:31.156990] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:04.142 [2024-08-14 06:52:31.156999] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:04.142 06:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:04.142 06:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:04.142 06:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:04.142 06:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:04.142 06:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:04.142 06:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:04.142 06:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:04.142 06:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:04.142 06:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:04.142 06:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:04.142 06:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.142 06:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.401 06:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:04.401 "name": "raid_bdev1", 00:21:04.401 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:21:04.401 "strip_size_kb": 0, 00:21:04.401 "state": "online", 00:21:04.401 "raid_level": "raid1", 00:21:04.401 "superblock": true, 00:21:04.401 "num_base_bdevs": 4, 00:21:04.401 "num_base_bdevs_discovered": 2, 00:21:04.401 "num_base_bdevs_operational": 2, 00:21:04.401 "base_bdevs_list": [ 00:21:04.401 { 00:21:04.401 "name": null, 00:21:04.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.401 "is_configured": false, 00:21:04.401 "data_offset": 2048, 00:21:04.401 "data_size": 63488 00:21:04.401 }, 00:21:04.401 { 00:21:04.401 "name": null, 00:21:04.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.401 "is_configured": false, 00:21:04.401 "data_offset": 2048, 00:21:04.401 "data_size": 63488 00:21:04.401 }, 00:21:04.401 { 00:21:04.401 "name": "BaseBdev3", 00:21:04.401 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:21:04.401 "is_configured": true, 00:21:04.401 "data_offset": 2048, 00:21:04.401 "data_size": 63488 00:21:04.401 }, 00:21:04.401 { 00:21:04.401 "name": "BaseBdev4", 00:21:04.401 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:21:04.401 "is_configured": true, 00:21:04.401 "data_offset": 2048, 00:21:04.401 "data_size": 63488 
00:21:04.401 } 00:21:04.401 ] 00:21:04.401 }' 00:21:04.401 06:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:04.401 06:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.968 06:52:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:05.226 [2024-08-14 06:52:32.307891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:05.226 [2024-08-14 06:52:32.308081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.226 [2024-08-14 06:52:32.308133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:05.226 [2024-08-14 06:52:32.308187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.226 [2024-08-14 06:52:32.308790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.226 [2024-08-14 06:52:32.308871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:05.226 [2024-08-14 06:52:32.309018] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:05.226 [2024-08-14 06:52:32.309080] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:05.226 [2024-08-14 06:52:32.309134] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:05.226 [2024-08-14 06:52:32.309242] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.226 [2024-08-14 06:52:32.313328] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033950 00:21:05.226 spare 00:21:05.226 [2024-08-14 06:52:32.315727] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:05.226 06:52:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:21:06.163 06:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.163 06:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:06.163 06:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:06.163 06:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:06.163 06:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:06.163 06:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.163 06:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.729 06:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:06.729 "name": "raid_bdev1", 00:21:06.730 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:21:06.730 "strip_size_kb": 0, 00:21:06.730 "state": "online", 00:21:06.730 "raid_level": "raid1", 00:21:06.730 "superblock": true, 00:21:06.730 "num_base_bdevs": 4, 00:21:06.730 "num_base_bdevs_discovered": 3, 00:21:06.730 "num_base_bdevs_operational": 3, 00:21:06.730 "process": { 00:21:06.730 "type": "rebuild", 00:21:06.730 "target": 
"spare", 00:21:06.730 "progress": { 00:21:06.730 "blocks": 26624, 00:21:06.730 "percent": 41 00:21:06.730 } 00:21:06.730 }, 00:21:06.730 "base_bdevs_list": [ 00:21:06.730 { 00:21:06.730 "name": "spare", 00:21:06.730 "uuid": "a26e8cd2-d817-5922-9d7e-b45f72f3bd58", 00:21:06.730 "is_configured": true, 00:21:06.730 "data_offset": 2048, 00:21:06.730 "data_size": 63488 00:21:06.730 }, 00:21:06.730 { 00:21:06.730 "name": null, 00:21:06.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.730 "is_configured": false, 00:21:06.730 "data_offset": 2048, 00:21:06.730 "data_size": 63488 00:21:06.730 }, 00:21:06.730 { 00:21:06.730 "name": "BaseBdev3", 00:21:06.730 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:21:06.730 "is_configured": true, 00:21:06.730 "data_offset": 2048, 00:21:06.730 "data_size": 63488 00:21:06.730 }, 00:21:06.730 { 00:21:06.730 "name": "BaseBdev4", 00:21:06.730 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:21:06.730 "is_configured": true, 00:21:06.730 "data_offset": 2048, 00:21:06.730 "data_size": 63488 00:21:06.730 } 00:21:06.730 ] 00:21:06.730 }' 00:21:06.730 06:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:06.730 06:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.730 06:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:06.730 06:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.730 06:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:06.989 [2024-08-14 06:52:33.999472] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:06.989 [2024-08-14 06:52:34.023882] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:06.989 [2024-08-14 06:52:34.024014] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.989 [2024-08-14 06:52:34.024034] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:06.989 [2024-08-14 06:52:34.024046] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:06.989 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:06.989 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:06.989 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:06.989 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:06.989 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:06.989 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:06.989 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.989 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:06.989 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.989 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.989 06:52:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.989 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.247 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:07.247 "name": "raid_bdev1", 00:21:07.247 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:21:07.247 "strip_size_kb": 0, 00:21:07.247 "state": "online", 00:21:07.247 "raid_level": "raid1", 00:21:07.247 "superblock": true, 00:21:07.247 "num_base_bdevs": 4, 00:21:07.247 "num_base_bdevs_discovered": 2, 00:21:07.247 "num_base_bdevs_operational": 2, 00:21:07.247 "base_bdevs_list": [ 00:21:07.247 { 00:21:07.247 "name": null, 00:21:07.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.247 "is_configured": false, 00:21:07.247 "data_offset": 2048, 00:21:07.248 "data_size": 63488 00:21:07.248 }, 00:21:07.248 { 00:21:07.248 "name": null, 00:21:07.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.248 "is_configured": false, 00:21:07.248 "data_offset": 2048, 00:21:07.248 "data_size": 63488 00:21:07.248 }, 00:21:07.248 { 00:21:07.248 "name": "BaseBdev3", 00:21:07.248 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:21:07.248 "is_configured": true, 00:21:07.248 "data_offset": 2048, 00:21:07.248 "data_size": 63488 00:21:07.248 }, 00:21:07.248 { 00:21:07.248 "name": "BaseBdev4", 00:21:07.248 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:21:07.248 "is_configured": true, 00:21:07.248 "data_offset": 2048, 00:21:07.248 "data_size": 63488 00:21:07.248 } 00:21:07.248 ] 00:21:07.248 }' 00:21:07.248 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:07.248 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:07.815 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:07.815 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:07.815 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:07.815 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:07.815 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:07.815 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.815 06:52:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.074 06:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:08.074 "name": "raid_bdev1", 00:21:08.074 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:21:08.074 "strip_size_kb": 0, 00:21:08.074 "state": "online", 00:21:08.074 "raid_level": "raid1", 00:21:08.074 "superblock": true, 00:21:08.074 "num_base_bdevs": 4, 00:21:08.074 "num_base_bdevs_discovered": 2, 00:21:08.074 "num_base_bdevs_operational": 2, 00:21:08.074 "base_bdevs_list": [ 00:21:08.074 { 00:21:08.074 "name": null, 00:21:08.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.074 "is_configured": false, 00:21:08.074 "data_offset": 2048, 00:21:08.074 "data_size": 63488 00:21:08.074 }, 00:21:08.074 { 00:21:08.074 "name": null, 
00:21:08.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.074 "is_configured": false, 00:21:08.074 "data_offset": 2048, 00:21:08.074 "data_size": 63488 00:21:08.074 }, 00:21:08.074 { 00:21:08.074 "name": "BaseBdev3", 00:21:08.074 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:21:08.074 "is_configured": true, 00:21:08.074 "data_offset": 2048, 00:21:08.074 "data_size": 63488 00:21:08.074 }, 00:21:08.074 { 00:21:08.074 "name": "BaseBdev4", 00:21:08.074 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:21:08.074 "is_configured": true, 00:21:08.074 "data_offset": 2048, 00:21:08.074 "data_size": 63488 00:21:08.074 } 00:21:08.074 ] 00:21:08.074 }' 00:21:08.074 06:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:08.074 06:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:08.074 06:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:08.074 06:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:08.074 06:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:08.338 06:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:08.596 [2024-08-14 06:52:35.778027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:08.596 [2024-08-14 06:52:35.778274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.596 [2024-08-14 06:52:35.778311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:21:08.596 [2024-08-14 06:52:35.778330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.596 [2024-08-14 06:52:35.778862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.596 [2024-08-14 06:52:35.778894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:08.596 [2024-08-14 06:52:35.778987] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:08.596 [2024-08-14 06:52:35.779007] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:08.596 [2024-08-14 06:52:35.779016] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:08.596 BaseBdev1 00:21:08.596 06:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:21:09.971 06:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:09.971 06:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:09.971 06:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:09.971 06:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:09.971 06:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:09.971 06:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:09.971 
06:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.971 06:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.971 06:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.971 06:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.971 06:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.971 06:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.971 06:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.971 "name": "raid_bdev1", 00:21:09.972 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:21:09.972 "strip_size_kb": 0, 00:21:09.972 "state": "online", 00:21:09.972 "raid_level": "raid1", 00:21:09.972 "superblock": true, 00:21:09.972 "num_base_bdevs": 4, 00:21:09.972 "num_base_bdevs_discovered": 2, 00:21:09.972 "num_base_bdevs_operational": 2, 00:21:09.972 "base_bdevs_list": [ 00:21:09.972 { 00:21:09.972 "name": null, 00:21:09.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.972 "is_configured": false, 00:21:09.972 "data_offset": 2048, 00:21:09.972 "data_size": 63488 00:21:09.972 }, 00:21:09.972 { 00:21:09.972 "name": null, 00:21:09.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.972 "is_configured": false, 00:21:09.972 "data_offset": 2048, 00:21:09.972 "data_size": 63488 00:21:09.972 }, 00:21:09.972 { 00:21:09.972 "name": "BaseBdev3", 00:21:09.972 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:21:09.972 "is_configured": true, 00:21:09.972 "data_offset": 2048, 00:21:09.972 "data_size": 63488 00:21:09.972 }, 00:21:09.972 { 00:21:09.972 "name": "BaseBdev4", 00:21:09.972 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:21:09.972 "is_configured": true, 00:21:09.972 "data_offset": 2048, 00:21:09.972 "data_size": 63488 00:21:09.972 } 00:21:09.972 ] 00:21:09.972 }' 00:21:09.972 06:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.972 06:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:10.539 06:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.539 06:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:10.539 06:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:10.539 06:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:10.539 06:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:10.539 06:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.539 06:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.798 06:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:10.798 "name": "raid_bdev1", 00:21:10.798 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:21:10.798 "strip_size_kb": 0, 00:21:10.798 "state": "online", 00:21:10.798 "raid_level": "raid1", 00:21:10.798 
"superblock": true, 00:21:10.798 "num_base_bdevs": 4, 00:21:10.798 "num_base_bdevs_discovered": 2, 00:21:10.798 "num_base_bdevs_operational": 2, 00:21:10.798 "base_bdevs_list": [ 00:21:10.798 { 00:21:10.798 "name": null, 00:21:10.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.798 "is_configured": false, 00:21:10.798 "data_offset": 2048, 00:21:10.798 "data_size": 63488 00:21:10.798 }, 00:21:10.798 { 00:21:10.798 "name": null, 00:21:10.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.798 "is_configured": false, 00:21:10.798 "data_offset": 2048, 00:21:10.798 "data_size": 63488 00:21:10.798 }, 00:21:10.798 { 00:21:10.798 "name": "BaseBdev3", 00:21:10.798 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:21:10.798 "is_configured": true, 00:21:10.798 "data_offset": 2048, 00:21:10.798 "data_size": 63488 00:21:10.798 }, 00:21:10.798 { 00:21:10.798 "name": "BaseBdev4", 00:21:10.798 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:21:10.798 "is_configured": true, 00:21:10.798 "data_offset": 2048, 00:21:10.798 "data_size": 63488 00:21:10.798 } 00:21:10.798 ] 00:21:10.798 }' 00:21:10.798 06:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:10.798 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:10.798 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:11.056 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:11.056 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:11.056 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@646 -- # local es=0 00:21:11.056 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:11.056 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:11.056 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:21:11.056 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:11.056 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:21:11.056 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:11.056 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:21:11.056 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:11.056 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:11.056 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:11.315 [2024-08-14 06:52:38.322023] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:11.315 
[2024-08-14 06:52:38.322327] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:11.315 [2024-08-14 06:52:38.322404] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:11.315 request: 00:21:11.315 { 00:21:11.315 "base_bdev": "BaseBdev1", 00:21:11.315 "raid_bdev": "raid_bdev1", 00:21:11.315 "method": "bdev_raid_add_base_bdev", 00:21:11.315 "req_id": 1 00:21:11.315 } 00:21:11.315 Got JSON-RPC error response 00:21:11.315 response: 00:21:11.315 { 00:21:11.315 "code": -22, 00:21:11.315 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:11.315 } 00:21:11.315 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@649 -- # es=1 00:21:11.315 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:21:11.315 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:21:11.315 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:21:11.315 06:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:21:12.250 06:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:12.250 06:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:12.250 06:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:12.250 06:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:12.250 06:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:12.250 06:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:12.250 06:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:12.250 06:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:12.250 06:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:12.250 06:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:12.250 06:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.250 06:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.508 06:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:12.508 "name": "raid_bdev1", 00:21:12.508 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:21:12.508 "strip_size_kb": 0, 00:21:12.508 "state": "online", 00:21:12.508 "raid_level": "raid1", 00:21:12.508 "superblock": true, 00:21:12.508 "num_base_bdevs": 4, 00:21:12.508 "num_base_bdevs_discovered": 2, 00:21:12.508 "num_base_bdevs_operational": 2, 00:21:12.508 "base_bdevs_list": [ 00:21:12.508 { 00:21:12.508 "name": null, 00:21:12.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.508 "is_configured": false, 00:21:12.508 "data_offset": 2048, 00:21:12.508 "data_size": 63488 00:21:12.508 }, 00:21:12.508 { 00:21:12.508 "name": null, 00:21:12.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.508 "is_configured": false, 00:21:12.508 
"data_offset": 2048, 00:21:12.508 "data_size": 63488 00:21:12.508 }, 00:21:12.508 { 00:21:12.508 "name": "BaseBdev3", 00:21:12.508 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:21:12.508 "is_configured": true, 00:21:12.508 "data_offset": 2048, 00:21:12.508 "data_size": 63488 00:21:12.508 }, 00:21:12.508 { 00:21:12.508 "name": "BaseBdev4", 00:21:12.508 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:21:12.508 "is_configured": true, 00:21:12.508 "data_offset": 2048, 00:21:12.508 "data_size": 63488 00:21:12.508 } 00:21:12.508 ] 00:21:12.508 }' 00:21:12.508 06:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:12.508 06:52:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.075 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:13.075 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:13.075 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:13.075 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:13.075 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:13.075 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.075 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.643 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:13.643 "name": "raid_bdev1", 00:21:13.643 "uuid": "5de3ae8d-ae38-42c1-be22-004ffaec3fd2", 00:21:13.643 "strip_size_kb": 0, 00:21:13.643 "state": "online", 00:21:13.643 "raid_level": "raid1", 00:21:13.643 "superblock": true, 00:21:13.643 "num_base_bdevs": 4, 00:21:13.643 "num_base_bdevs_discovered": 2, 00:21:13.643 "num_base_bdevs_operational": 2, 00:21:13.643 "base_bdevs_list": [ 00:21:13.643 { 00:21:13.643 "name": null, 00:21:13.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.643 "is_configured": false, 00:21:13.643 "data_offset": 2048, 00:21:13.643 "data_size": 63488 00:21:13.643 }, 00:21:13.643 { 00:21:13.643 "name": null, 00:21:13.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.643 "is_configured": false, 00:21:13.643 "data_offset": 2048, 00:21:13.643 "data_size": 63488 00:21:13.643 }, 00:21:13.643 { 00:21:13.643 "name": "BaseBdev3", 00:21:13.643 "uuid": "1e8a7e60-38a2-5592-b10a-86ef2bead97e", 00:21:13.643 "is_configured": true, 00:21:13.643 "data_offset": 2048, 00:21:13.643 "data_size": 63488 00:21:13.643 }, 00:21:13.643 { 00:21:13.643 "name": "BaseBdev4", 00:21:13.643 "uuid": "6a92baca-7a45-5535-b0b8-7fd4aac58fe8", 00:21:13.643 "is_configured": true, 00:21:13.643 "data_offset": 2048, 00:21:13.643 "data_size": 63488 00:21:13.643 } 00:21:13.643 ] 00:21:13.643 }' 00:21:13.643 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:13.643 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:13.643 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:13.643 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:13.643 06:52:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 96990 00:21:13.643 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@946 -- # '[' -z 96990 ']' 00:21:13.643 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # kill -0 96990 00:21:13.643 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # uname 00:21:13.643 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:13.643 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96990 00:21:13.643 killing process with pid 96990 00:21:13.643 Received shutdown signal, test time was about 28.306277 seconds 00:21:13.643 00:21:13.643 Latency(us) 00:21:13.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.643 =================================================================================================================== 00:21:13.643 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:13.643 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:13.643 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:13.643 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96990' 00:21:13.643 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@965 -- # kill 96990 00:21:13.643 [2024-08-14 06:52:40.754440] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:13.643 06:52:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # wait 96990 00:21:13.643 [2024-08-14 06:52:40.754625] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:13.643 [2024-08-14 06:52:40.754710] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:13.643 [2024-08-14 06:52:40.754725] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:21:13.643 [2024-08-14 06:52:40.803858] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:13.902 06:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:21:13.902 00:21:13.902 real 0m33.946s 00:21:13.902 user 0m54.930s 00:21:13.902 sys 0m4.216s 00:21:13.902 06:52:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:13.902 06:52:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.902 ************************************ 00:21:13.902 END TEST raid_rebuild_test_sb_io 00:21:13.902 ************************************ 00:21:13.902 06:52:41 bdev_raid -- bdev/bdev_raid.sh@964 -- # for n in {3..4} 00:21:13.902 06:52:41 bdev_raid -- bdev/bdev_raid.sh@965 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:21:13.902 06:52:41 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:21:13.902 06:52:41 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:13.902 06:52:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:13.902 ************************************ 00:21:13.902 START TEST raid5f_state_function_test 00:21:13.902 ************************************ 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1121 -- # 
raid_state_function_test raid5f 3 false 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:21:13.902 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:13.903 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:13.903 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:21:13.903 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:21:13.903 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=97855 00:21:13.903 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:13.903 Process raid pid: 97855 00:21:13.903 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 97855' 00:21:13.903 06:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 97855 /var/tmp/spdk-raid.sock 00:21:13.903 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:13.903 06:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 97855 ']' 00:21:13.903 06:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:13.903 06:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:13.903 06:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:13.903 06:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:13.903 06:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.162 [2024-08-14 06:52:41.227409] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:21:14.162 [2024-08-14 06:52:41.227547] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.162 [2024-08-14 06:52:41.378967] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.420 [2024-08-14 06:52:41.439550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.420 [2024-08-14 06:52:41.490208] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:14.420 [2024-08-14 06:52:41.490254] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:14.986 06:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:14.986 06:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:21:14.986 06:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:15.245 [2024-08-14 06:52:42.426747] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:15.245 [2024-08-14 06:52:42.426934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:15.245 [2024-08-14 06:52:42.426961] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:15.245 [2024-08-14 06:52:42.426974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:15.245 [2024-08-14 06:52:42.426987] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:15.245 [2024-08-14 06:52:42.426997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:15.245 06:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:15.245 06:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:15.245 06:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:15.245 06:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:15.245 06:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:21:15.245 06:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:15.245 06:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:15.245 06:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:15.245 06:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:15.245 06:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:15.245 06:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.245 06:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.504 06:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:15.504 "name": "Existed_Raid", 00:21:15.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.504 "strip_size_kb": 64, 00:21:15.504 "state": "configuring", 00:21:15.504 "raid_level": "raid5f", 00:21:15.504 "superblock": false, 00:21:15.504 "num_base_bdevs": 3, 00:21:15.504 "num_base_bdevs_discovered": 0, 00:21:15.504 "num_base_bdevs_operational": 3, 00:21:15.504 "base_bdevs_list": [ 00:21:15.504 { 00:21:15.504 "name": "BaseBdev1", 00:21:15.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.504 "is_configured": false, 00:21:15.504 "data_offset": 0, 00:21:15.504 "data_size": 0 00:21:15.504 }, 00:21:15.504 { 00:21:15.504 "name": "BaseBdev2", 00:21:15.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.504 "is_configured": false, 00:21:15.504 "data_offset": 0, 00:21:15.504 "data_size": 0 00:21:15.504 }, 00:21:15.504 { 00:21:15.504 "name": "BaseBdev3", 00:21:15.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.504 "is_configured": false, 00:21:15.504 "data_offset": 0, 00:21:15.504 "data_size": 0 00:21:15.504 } 00:21:15.504 ] 00:21:15.504 }' 00:21:15.504 06:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:15.504 06:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.069 06:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:16.327 [2024-08-14 06:52:43.562062] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:16.327 [2024-08-14 06:52:43.562117] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:21:16.586 06:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:16.843 [2024-08-14 06:52:43.861650] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:16.843 [2024-08-14 06:52:43.861718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:16.843 [2024-08-14 06:52:43.861732] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:16.843 [2024-08-14 06:52:43.861743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:16.843 [2024-08-14 
06:52:43.861753] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:16.843 [2024-08-14 06:52:43.861761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:16.843 06:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:17.102 [2024-08-14 06:52:44.126931] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:17.102 BaseBdev1 00:21:17.102 06:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:17.102 06:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:17.102 06:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:17.102 06:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:17.102 06:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:17.102 06:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:17.102 06:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:17.359 06:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:17.617 [ 00:21:17.617 { 00:21:17.617 "name": "BaseBdev1", 00:21:17.617 "aliases": [ 00:21:17.617 "6314c79e-6aeb-402e-86f1-c4df820f679b" 00:21:17.617 ], 00:21:17.617 "product_name": "Malloc disk", 00:21:17.617 "block_size": 512, 00:21:17.617 "num_blocks": 65536, 00:21:17.617 "uuid": "6314c79e-6aeb-402e-86f1-c4df820f679b", 00:21:17.617 "assigned_rate_limits": { 00:21:17.617 "rw_ios_per_sec": 0, 00:21:17.617 "rw_mbytes_per_sec": 0, 00:21:17.617 "r_mbytes_per_sec": 0, 00:21:17.617 "w_mbytes_per_sec": 0 00:21:17.617 }, 00:21:17.617 "claimed": true, 00:21:17.617 "claim_type": "exclusive_write", 00:21:17.617 "zoned": false, 00:21:17.617 "supported_io_types": { 00:21:17.617 "read": true, 00:21:17.617 "write": true, 00:21:17.617 "unmap": true, 00:21:17.617 "flush": true, 00:21:17.617 "reset": true, 00:21:17.617 "nvme_admin": false, 00:21:17.617 "nvme_io": false, 00:21:17.617 "nvme_io_md": false, 00:21:17.617 "write_zeroes": true, 00:21:17.618 "zcopy": true, 00:21:17.618 "get_zone_info": false, 00:21:17.618 "zone_management": false, 00:21:17.618 "zone_append": false, 00:21:17.618 "compare": false, 00:21:17.618 "compare_and_write": false, 00:21:17.618 "abort": true, 00:21:17.618 "seek_hole": false, 00:21:17.618 "seek_data": false, 00:21:17.618 "copy": true, 00:21:17.618 "nvme_iov_md": false 00:21:17.618 }, 00:21:17.618 "memory_domains": [ 00:21:17.618 { 00:21:17.618 "dma_device_id": "system", 00:21:17.618 "dma_device_type": 1 00:21:17.618 }, 00:21:17.618 { 00:21:17.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.618 "dma_device_type": 2 00:21:17.618 } 00:21:17.618 ], 00:21:17.618 "driver_specific": {} 00:21:17.618 } 00:21:17.618 ] 00:21:17.618 06:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:17.618 06:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 3 00:21:17.618 06:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:17.618 06:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:17.618 06:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:17.618 06:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:17.618 06:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:17.618 06:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:17.618 06:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:17.618 06:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:17.618 06:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:17.618 06:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.618 06:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.876 06:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:17.876 "name": "Existed_Raid", 00:21:17.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.876 "strip_size_kb": 64, 00:21:17.876 "state": "configuring", 00:21:17.876 "raid_level": "raid5f", 00:21:17.876 "superblock": false, 00:21:17.876 "num_base_bdevs": 3, 00:21:17.876 "num_base_bdevs_discovered": 1, 00:21:17.876 "num_base_bdevs_operational": 3, 00:21:17.876 "base_bdevs_list": [ 00:21:17.876 { 00:21:17.876 "name": "BaseBdev1", 00:21:17.876 "uuid": "6314c79e-6aeb-402e-86f1-c4df820f679b", 00:21:17.876 "is_configured": true, 00:21:17.876 "data_offset": 0, 00:21:17.876 "data_size": 65536 00:21:17.876 }, 00:21:17.876 { 00:21:17.876 "name": "BaseBdev2", 00:21:17.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.876 "is_configured": false, 00:21:17.876 "data_offset": 0, 00:21:17.876 "data_size": 0 00:21:17.876 }, 00:21:17.876 { 00:21:17.876 "name": "BaseBdev3", 00:21:17.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.876 "is_configured": false, 00:21:17.876 "data_offset": 0, 00:21:17.876 "data_size": 0 00:21:17.876 } 00:21:17.876 ] 00:21:17.876 }' 00:21:17.876 06:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:17.876 06:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.480 06:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:18.738 [2024-08-14 06:52:45.824813] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:18.738 [2024-08-14 06:52:45.825010] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:21:18.738 06:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:18.996 [2024-08-14 06:52:46.076457] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:18.997 [2024-08-14 06:52:46.078790] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:18.997 [2024-08-14 06:52:46.078906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:18.997 [2024-08-14 06:52:46.078951] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:18.997 [2024-08-14 06:52:46.078985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:18.997 06:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:18.997 06:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:18.997 06:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:18.997 06:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:18.997 06:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:18.997 06:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:18.997 06:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:18.997 06:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:18.997 06:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:18.997 06:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:18.997 06:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:18.997 06:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:18.997 06:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.997 06:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.256 06:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.256 "name": "Existed_Raid", 00:21:19.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.256 "strip_size_kb": 64, 00:21:19.256 "state": "configuring", 00:21:19.256 "raid_level": "raid5f", 00:21:19.256 "superblock": false, 00:21:19.256 "num_base_bdevs": 3, 00:21:19.256 "num_base_bdevs_discovered": 1, 00:21:19.256 "num_base_bdevs_operational": 3, 00:21:19.256 "base_bdevs_list": [ 00:21:19.256 { 00:21:19.256 "name": "BaseBdev1", 00:21:19.256 "uuid": "6314c79e-6aeb-402e-86f1-c4df820f679b", 00:21:19.256 "is_configured": true, 00:21:19.256 "data_offset": 0, 00:21:19.256 "data_size": 65536 00:21:19.256 }, 00:21:19.256 { 00:21:19.256 "name": "BaseBdev2", 00:21:19.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.256 "is_configured": false, 00:21:19.256 "data_offset": 0, 00:21:19.256 "data_size": 0 00:21:19.256 }, 00:21:19.256 { 00:21:19.256 "name": "BaseBdev3", 00:21:19.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.256 "is_configured": false, 00:21:19.256 "data_offset": 0, 00:21:19.256 "data_size": 0 00:21:19.256 } 00:21:19.256 ] 00:21:19.256 }' 00:21:19.256 06:52:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.256 06:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.824 06:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:20.083 [2024-08-14 06:52:47.237935] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:20.083 BaseBdev2 00:21:20.083 06:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:20.083 06:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:20.083 06:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:20.083 06:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:20.083 06:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:20.083 06:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:20.083 06:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:20.341 06:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:20.601 [ 00:21:20.601 { 00:21:20.601 "name": "BaseBdev2", 00:21:20.601 "aliases": [ 00:21:20.601 "981bf983-f25e-4414-a94d-b51516ee9b97" 00:21:20.601 ], 00:21:20.601 "product_name": "Malloc disk", 00:21:20.601 "block_size": 512, 00:21:20.601 "num_blocks": 65536, 00:21:20.601 "uuid": "981bf983-f25e-4414-a94d-b51516ee9b97", 00:21:20.601 "assigned_rate_limits": { 00:21:20.601 "rw_ios_per_sec": 0, 00:21:20.601 "rw_mbytes_per_sec": 0, 00:21:20.601 "r_mbytes_per_sec": 0, 00:21:20.601 "w_mbytes_per_sec": 0 00:21:20.601 }, 00:21:20.601 "claimed": true, 00:21:20.601 "claim_type": "exclusive_write", 00:21:20.601 "zoned": false, 00:21:20.601 "supported_io_types": { 00:21:20.601 "read": true, 00:21:20.601 "write": true, 00:21:20.601 "unmap": true, 00:21:20.601 "flush": true, 00:21:20.601 "reset": true, 00:21:20.601 "nvme_admin": false, 00:21:20.601 "nvme_io": false, 00:21:20.601 "nvme_io_md": false, 00:21:20.601 "write_zeroes": true, 00:21:20.601 "zcopy": true, 00:21:20.601 "get_zone_info": false, 00:21:20.601 "zone_management": false, 00:21:20.601 "zone_append": false, 00:21:20.601 "compare": false, 00:21:20.601 "compare_and_write": false, 00:21:20.601 "abort": true, 00:21:20.601 "seek_hole": false, 00:21:20.601 "seek_data": false, 00:21:20.601 "copy": true, 00:21:20.601 "nvme_iov_md": false 00:21:20.601 }, 00:21:20.601 "memory_domains": [ 00:21:20.601 { 00:21:20.601 "dma_device_id": "system", 00:21:20.601 "dma_device_type": 1 00:21:20.601 }, 00:21:20.601 { 00:21:20.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.601 "dma_device_type": 2 00:21:20.601 } 00:21:20.601 ], 00:21:20.601 "driver_specific": {} 00:21:20.601 } 00:21:20.601 ] 00:21:20.601 06:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:20.601 06:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:20.601 06:52:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:20.601 06:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:20.601 06:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:20.601 06:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:20.601 06:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:20.601 06:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:20.601 06:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:20.601 06:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:20.601 06:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:20.601 06:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:20.601 06:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:20.601 06:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.601 06:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:20.860 06:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:20.860 "name": "Existed_Raid", 00:21:20.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.860 "strip_size_kb": 64, 00:21:20.860 "state": "configuring", 00:21:20.860 "raid_level": "raid5f", 00:21:20.860 "superblock": false, 00:21:20.860 "num_base_bdevs": 3, 00:21:20.860 "num_base_bdevs_discovered": 2, 00:21:20.860 "num_base_bdevs_operational": 3, 00:21:20.860 "base_bdevs_list": [ 00:21:20.860 { 00:21:20.860 "name": "BaseBdev1", 00:21:20.860 "uuid": "6314c79e-6aeb-402e-86f1-c4df820f679b", 00:21:20.860 "is_configured": true, 00:21:20.860 "data_offset": 0, 00:21:20.860 "data_size": 65536 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "name": "BaseBdev2", 00:21:20.860 "uuid": "981bf983-f25e-4414-a94d-b51516ee9b97", 00:21:20.860 "is_configured": true, 00:21:20.860 "data_offset": 0, 00:21:20.860 "data_size": 65536 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "name": "BaseBdev3", 00:21:20.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.860 "is_configured": false, 00:21:20.860 "data_offset": 0, 00:21:20.860 "data_size": 0 00:21:20.860 } 00:21:20.860 ] 00:21:20.860 }' 00:21:20.860 06:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:20.860 06:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.796 06:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:22.054 [2024-08-14 06:52:49.060896] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:22.054 [2024-08-14 06:52:49.060996] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:21:22.054 [2024-08-14 06:52:49.061009] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:22.054 
[2024-08-14 06:52:49.061427] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:21:22.054 [2024-08-14 06:52:49.061981] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:21:22.054 [2024-08-14 06:52:49.062013] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:21:22.054 [2024-08-14 06:52:49.062289] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:22.054 BaseBdev3 00:21:22.054 06:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:22.054 06:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:21:22.054 06:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:22.054 06:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:22.054 06:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:22.054 06:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:22.054 06:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:22.312 06:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:22.572 [ 00:21:22.572 { 00:21:22.572 "name": "BaseBdev3", 00:21:22.572 "aliases": [ 00:21:22.572 "46f241d0-5526-4ad3-8ba6-713f4627d82f" 00:21:22.572 ], 00:21:22.572 "product_name": "Malloc disk", 00:21:22.572 "block_size": 512, 00:21:22.572 "num_blocks": 65536, 00:21:22.572 "uuid": "46f241d0-5526-4ad3-8ba6-713f4627d82f", 00:21:22.572 "assigned_rate_limits": { 00:21:22.572 "rw_ios_per_sec": 0, 00:21:22.572 "rw_mbytes_per_sec": 0, 00:21:22.572 "r_mbytes_per_sec": 0, 00:21:22.572 "w_mbytes_per_sec": 0 00:21:22.572 }, 00:21:22.572 "claimed": true, 00:21:22.572 "claim_type": "exclusive_write", 00:21:22.572 "zoned": false, 00:21:22.572 "supported_io_types": { 00:21:22.572 "read": true, 00:21:22.572 "write": true, 00:21:22.572 "unmap": true, 00:21:22.572 "flush": true, 00:21:22.572 "reset": true, 00:21:22.572 "nvme_admin": false, 00:21:22.572 "nvme_io": false, 00:21:22.572 "nvme_io_md": false, 00:21:22.572 "write_zeroes": true, 00:21:22.572 "zcopy": true, 00:21:22.572 "get_zone_info": false, 00:21:22.572 "zone_management": false, 00:21:22.572 "zone_append": false, 00:21:22.572 "compare": false, 00:21:22.572 "compare_and_write": false, 00:21:22.572 "abort": true, 00:21:22.572 "seek_hole": false, 00:21:22.572 "seek_data": false, 00:21:22.572 "copy": true, 00:21:22.572 "nvme_iov_md": false 00:21:22.572 }, 00:21:22.572 "memory_domains": [ 00:21:22.572 { 00:21:22.572 "dma_device_id": "system", 00:21:22.572 "dma_device_type": 1 00:21:22.572 }, 00:21:22.572 { 00:21:22.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:22.572 "dma_device_type": 2 00:21:22.572 } 00:21:22.572 ], 00:21:22.572 "driver_specific": {} 00:21:22.572 } 00:21:22.572 ] 00:21:22.572 06:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:22.572 06:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:22.572 06:52:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:22.572 06:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:22.572 06:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:22.572 06:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:22.572 06:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:22.572 06:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:22.572 06:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:22.572 06:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:22.572 06:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:22.572 06:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:22.572 06:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:22.572 06:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.572 06:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:22.830 06:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:22.830 "name": "Existed_Raid", 00:21:22.830 "uuid": "65197e06-d5ac-47d9-8987-11bd1b6b7dec", 00:21:22.830 "strip_size_kb": 64, 00:21:22.830 "state": "online", 00:21:22.830 "raid_level": "raid5f", 00:21:22.830 "superblock": false, 00:21:22.830 "num_base_bdevs": 3, 00:21:22.830 "num_base_bdevs_discovered": 3, 00:21:22.830 "num_base_bdevs_operational": 3, 00:21:22.830 "base_bdevs_list": [ 00:21:22.830 { 00:21:22.830 "name": "BaseBdev1", 00:21:22.830 "uuid": "6314c79e-6aeb-402e-86f1-c4df820f679b", 00:21:22.830 "is_configured": true, 00:21:22.830 "data_offset": 0, 00:21:22.830 "data_size": 65536 00:21:22.830 }, 00:21:22.830 { 00:21:22.830 "name": "BaseBdev2", 00:21:22.830 "uuid": "981bf983-f25e-4414-a94d-b51516ee9b97", 00:21:22.830 "is_configured": true, 00:21:22.830 "data_offset": 0, 00:21:22.830 "data_size": 65536 00:21:22.830 }, 00:21:22.830 { 00:21:22.830 "name": "BaseBdev3", 00:21:22.830 "uuid": "46f241d0-5526-4ad3-8ba6-713f4627d82f", 00:21:22.830 "is_configured": true, 00:21:22.830 "data_offset": 0, 00:21:22.830 "data_size": 65536 00:21:22.830 } 00:21:22.830 ] 00:21:22.830 }' 00:21:22.830 06:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:22.830 06:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.806 06:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:23.806 06:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:23.806 06:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:23.806 06:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:23.806 06:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:23.806 
06:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:23.806 06:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:23.806 06:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:23.806 [2024-08-14 06:52:50.938968] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:23.806 06:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:23.806 "name": "Existed_Raid", 00:21:23.806 "aliases": [ 00:21:23.806 "65197e06-d5ac-47d9-8987-11bd1b6b7dec" 00:21:23.806 ], 00:21:23.806 "product_name": "Raid Volume", 00:21:23.806 "block_size": 512, 00:21:23.806 "num_blocks": 131072, 00:21:23.806 "uuid": "65197e06-d5ac-47d9-8987-11bd1b6b7dec", 00:21:23.806 "assigned_rate_limits": { 00:21:23.806 "rw_ios_per_sec": 0, 00:21:23.806 "rw_mbytes_per_sec": 0, 00:21:23.806 "r_mbytes_per_sec": 0, 00:21:23.806 "w_mbytes_per_sec": 0 00:21:23.806 }, 00:21:23.806 "claimed": false, 00:21:23.806 "zoned": false, 00:21:23.806 "supported_io_types": { 00:21:23.806 "read": true, 00:21:23.806 "write": true, 00:21:23.806 "unmap": false, 00:21:23.806 "flush": false, 00:21:23.806 "reset": true, 00:21:23.806 "nvme_admin": false, 00:21:23.806 "nvme_io": false, 00:21:23.806 "nvme_io_md": false, 00:21:23.806 "write_zeroes": true, 00:21:23.806 "zcopy": false, 00:21:23.806 "get_zone_info": false, 00:21:23.806 "zone_management": false, 00:21:23.806 "zone_append": false, 00:21:23.806 "compare": false, 00:21:23.806 "compare_and_write": false, 00:21:23.806 "abort": false, 00:21:23.806 "seek_hole": false, 00:21:23.806 "seek_data": false, 00:21:23.806 "copy": false, 00:21:23.806 "nvme_iov_md": false 00:21:23.806 }, 00:21:23.806 "driver_specific": { 00:21:23.806 "raid": { 00:21:23.806 "uuid": "65197e06-d5ac-47d9-8987-11bd1b6b7dec", 00:21:23.806 "strip_size_kb": 64, 00:21:23.806 "state": "online", 00:21:23.806 "raid_level": "raid5f", 00:21:23.806 "superblock": false, 00:21:23.806 "num_base_bdevs": 3, 00:21:23.806 "num_base_bdevs_discovered": 3, 00:21:23.806 "num_base_bdevs_operational": 3, 00:21:23.806 "base_bdevs_list": [ 00:21:23.806 { 00:21:23.806 "name": "BaseBdev1", 00:21:23.806 "uuid": "6314c79e-6aeb-402e-86f1-c4df820f679b", 00:21:23.806 "is_configured": true, 00:21:23.806 "data_offset": 0, 00:21:23.806 "data_size": 65536 00:21:23.806 }, 00:21:23.806 { 00:21:23.806 "name": "BaseBdev2", 00:21:23.806 "uuid": "981bf983-f25e-4414-a94d-b51516ee9b97", 00:21:23.806 "is_configured": true, 00:21:23.806 "data_offset": 0, 00:21:23.806 "data_size": 65536 00:21:23.806 }, 00:21:23.806 { 00:21:23.806 "name": "BaseBdev3", 00:21:23.806 "uuid": "46f241d0-5526-4ad3-8ba6-713f4627d82f", 00:21:23.806 "is_configured": true, 00:21:23.806 "data_offset": 0, 00:21:23.806 "data_size": 65536 00:21:23.806 } 00:21:23.806 ] 00:21:23.806 } 00:21:23.806 } 00:21:23.806 }' 00:21:23.806 06:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:23.806 06:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:23.806 BaseBdev2 00:21:23.806 BaseBdev3' 00:21:23.806 06:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:23.806 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:23.806 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:24.065 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:24.065 "name": "BaseBdev1", 00:21:24.065 "aliases": [ 00:21:24.065 "6314c79e-6aeb-402e-86f1-c4df820f679b" 00:21:24.065 ], 00:21:24.065 "product_name": "Malloc disk", 00:21:24.065 "block_size": 512, 00:21:24.065 "num_blocks": 65536, 00:21:24.065 "uuid": "6314c79e-6aeb-402e-86f1-c4df820f679b", 00:21:24.065 "assigned_rate_limits": { 00:21:24.065 "rw_ios_per_sec": 0, 00:21:24.065 "rw_mbytes_per_sec": 0, 00:21:24.065 "r_mbytes_per_sec": 0, 00:21:24.065 "w_mbytes_per_sec": 0 00:21:24.065 }, 00:21:24.065 "claimed": true, 00:21:24.065 "claim_type": "exclusive_write", 00:21:24.065 "zoned": false, 00:21:24.065 "supported_io_types": { 00:21:24.065 "read": true, 00:21:24.065 "write": true, 00:21:24.065 "unmap": true, 00:21:24.065 "flush": true, 00:21:24.065 "reset": true, 00:21:24.065 "nvme_admin": false, 00:21:24.065 "nvme_io": false, 00:21:24.065 "nvme_io_md": false, 00:21:24.065 "write_zeroes": true, 00:21:24.065 "zcopy": true, 00:21:24.065 "get_zone_info": false, 00:21:24.065 "zone_management": false, 00:21:24.065 "zone_append": false, 00:21:24.065 "compare": false, 00:21:24.065 "compare_and_write": false, 00:21:24.065 "abort": true, 00:21:24.065 "seek_hole": false, 00:21:24.065 "seek_data": false, 00:21:24.065 "copy": true, 00:21:24.065 "nvme_iov_md": false 00:21:24.065 }, 00:21:24.065 "memory_domains": [ 00:21:24.065 { 00:21:24.065 "dma_device_id": "system", 00:21:24.065 "dma_device_type": 1 00:21:24.065 }, 00:21:24.065 { 00:21:24.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.065 "dma_device_type": 2 00:21:24.065 } 00:21:24.065 ], 00:21:24.065 "driver_specific": {} 00:21:24.065 }' 00:21:24.065 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:24.065 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:24.324 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:24.324 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:24.324 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:24.324 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:24.324 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:24.324 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:24.324 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:24.324 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:24.324 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:24.583 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:24.583 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:24.583 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:24.583 06:52:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:24.842 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:24.842 "name": "BaseBdev2", 00:21:24.842 "aliases": [ 00:21:24.842 "981bf983-f25e-4414-a94d-b51516ee9b97" 00:21:24.842 ], 00:21:24.842 "product_name": "Malloc disk", 00:21:24.842 "block_size": 512, 00:21:24.842 "num_blocks": 65536, 00:21:24.842 "uuid": "981bf983-f25e-4414-a94d-b51516ee9b97", 00:21:24.842 "assigned_rate_limits": { 00:21:24.842 "rw_ios_per_sec": 0, 00:21:24.842 "rw_mbytes_per_sec": 0, 00:21:24.842 "r_mbytes_per_sec": 0, 00:21:24.842 "w_mbytes_per_sec": 0 00:21:24.842 }, 00:21:24.842 "claimed": true, 00:21:24.842 "claim_type": "exclusive_write", 00:21:24.842 "zoned": false, 00:21:24.842 "supported_io_types": { 00:21:24.842 "read": true, 00:21:24.842 "write": true, 00:21:24.842 "unmap": true, 00:21:24.842 "flush": true, 00:21:24.842 "reset": true, 00:21:24.842 "nvme_admin": false, 00:21:24.842 "nvme_io": false, 00:21:24.842 "nvme_io_md": false, 00:21:24.842 "write_zeroes": true, 00:21:24.842 "zcopy": true, 00:21:24.842 "get_zone_info": false, 00:21:24.842 "zone_management": false, 00:21:24.842 "zone_append": false, 00:21:24.842 "compare": false, 00:21:24.842 "compare_and_write": false, 00:21:24.842 "abort": true, 00:21:24.842 "seek_hole": false, 00:21:24.842 "seek_data": false, 00:21:24.842 "copy": true, 00:21:24.842 "nvme_iov_md": false 00:21:24.842 }, 00:21:24.842 "memory_domains": [ 00:21:24.842 { 00:21:24.842 "dma_device_id": "system", 00:21:24.842 "dma_device_type": 1 00:21:24.842 }, 00:21:24.842 { 00:21:24.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.842 "dma_device_type": 2 00:21:24.842 } 00:21:24.842 ], 00:21:24.842 "driver_specific": {} 00:21:24.842 }' 00:21:24.842 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:24.842 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:24.842 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:24.842 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:24.842 06:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:24.842 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:24.842 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:24.842 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.102 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:25.102 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:25.102 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:25.102 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:25.102 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:25.102 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:25.102 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:25.361 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:25.361 "name": 
"BaseBdev3", 00:21:25.361 "aliases": [ 00:21:25.361 "46f241d0-5526-4ad3-8ba6-713f4627d82f" 00:21:25.361 ], 00:21:25.361 "product_name": "Malloc disk", 00:21:25.361 "block_size": 512, 00:21:25.361 "num_blocks": 65536, 00:21:25.361 "uuid": "46f241d0-5526-4ad3-8ba6-713f4627d82f", 00:21:25.361 "assigned_rate_limits": { 00:21:25.361 "rw_ios_per_sec": 0, 00:21:25.361 "rw_mbytes_per_sec": 0, 00:21:25.361 "r_mbytes_per_sec": 0, 00:21:25.361 "w_mbytes_per_sec": 0 00:21:25.361 }, 00:21:25.361 "claimed": true, 00:21:25.361 "claim_type": "exclusive_write", 00:21:25.361 "zoned": false, 00:21:25.361 "supported_io_types": { 00:21:25.361 "read": true, 00:21:25.361 "write": true, 00:21:25.361 "unmap": true, 00:21:25.361 "flush": true, 00:21:25.361 "reset": true, 00:21:25.361 "nvme_admin": false, 00:21:25.361 "nvme_io": false, 00:21:25.361 "nvme_io_md": false, 00:21:25.361 "write_zeroes": true, 00:21:25.361 "zcopy": true, 00:21:25.361 "get_zone_info": false, 00:21:25.361 "zone_management": false, 00:21:25.361 "zone_append": false, 00:21:25.361 "compare": false, 00:21:25.361 "compare_and_write": false, 00:21:25.361 "abort": true, 00:21:25.361 "seek_hole": false, 00:21:25.361 "seek_data": false, 00:21:25.361 "copy": true, 00:21:25.361 "nvme_iov_md": false 00:21:25.361 }, 00:21:25.361 "memory_domains": [ 00:21:25.361 { 00:21:25.361 "dma_device_id": "system", 00:21:25.361 "dma_device_type": 1 00:21:25.361 }, 00:21:25.361 { 00:21:25.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.361 "dma_device_type": 2 00:21:25.361 } 00:21:25.361 ], 00:21:25.361 "driver_specific": {} 00:21:25.361 }' 00:21:25.361 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:25.361 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:25.361 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:25.361 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:25.620 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:25.620 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:25.620 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.620 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.620 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:25.620 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:25.620 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:25.620 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:25.620 06:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:26.189 [2024-08-14 06:52:53.138128] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 
-- # return 0 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:26.189 "name": "Existed_Raid", 00:21:26.189 "uuid": "65197e06-d5ac-47d9-8987-11bd1b6b7dec", 00:21:26.189 "strip_size_kb": 64, 00:21:26.189 "state": "online", 00:21:26.189 "raid_level": "raid5f", 00:21:26.189 "superblock": false, 00:21:26.189 "num_base_bdevs": 3, 00:21:26.189 "num_base_bdevs_discovered": 2, 00:21:26.189 "num_base_bdevs_operational": 2, 00:21:26.189 "base_bdevs_list": [ 00:21:26.189 { 00:21:26.189 "name": null, 00:21:26.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.189 "is_configured": false, 00:21:26.189 "data_offset": 0, 00:21:26.189 "data_size": 65536 00:21:26.189 }, 00:21:26.189 { 00:21:26.189 "name": "BaseBdev2", 00:21:26.189 "uuid": "981bf983-f25e-4414-a94d-b51516ee9b97", 00:21:26.189 "is_configured": true, 00:21:26.189 "data_offset": 0, 00:21:26.189 "data_size": 65536 00:21:26.189 }, 00:21:26.189 { 00:21:26.189 "name": "BaseBdev3", 00:21:26.189 "uuid": "46f241d0-5526-4ad3-8ba6-713f4627d82f", 00:21:26.189 "is_configured": true, 00:21:26.189 "data_offset": 0, 00:21:26.189 "data_size": 65536 00:21:26.189 } 00:21:26.189 ] 00:21:26.189 }' 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:26.189 06:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.756 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:26.756 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:26.756 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.756 06:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:27.014 06:52:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:27.014 06:52:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:27.014 06:52:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:27.272 [2024-08-14 06:52:54.473578] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:27.272 [2024-08-14 06:52:54.473702] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:27.272 [2024-08-14 06:52:54.485649] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:27.272 06:52:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:27.272 06:52:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:27.272 06:52:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:27.272 06:52:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.530 06:52:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:27.530 06:52:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:27.530 06:52:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:27.789 [2024-08-14 06:52:55.040907] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:27.789 [2024-08-14 06:52:55.040987] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:21:28.048 06:52:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:28.048 06:52:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:28.048 06:52:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.048 06:52:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:28.310 06:52:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:28.310 06:52:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:28.310 06:52:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:21:28.310 06:52:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:28.310 06:52:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:28.310 06:52:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:28.310 BaseBdev2 00:21:28.310 06:52:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:28.310 06:52:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:28.310 06:52:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # 
local bdev_timeout= 00:21:28.568 06:52:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:28.568 06:52:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:28.568 06:52:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:28.568 06:52:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:28.568 06:52:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:28.826 [ 00:21:28.826 { 00:21:28.826 "name": "BaseBdev2", 00:21:28.826 "aliases": [ 00:21:28.826 "e799b707-f9bd-4c8a-9663-3101fb465869" 00:21:28.826 ], 00:21:28.826 "product_name": "Malloc disk", 00:21:28.826 "block_size": 512, 00:21:28.826 "num_blocks": 65536, 00:21:28.826 "uuid": "e799b707-f9bd-4c8a-9663-3101fb465869", 00:21:28.826 "assigned_rate_limits": { 00:21:28.826 "rw_ios_per_sec": 0, 00:21:28.826 "rw_mbytes_per_sec": 0, 00:21:28.826 "r_mbytes_per_sec": 0, 00:21:28.826 "w_mbytes_per_sec": 0 00:21:28.826 }, 00:21:28.826 "claimed": false, 00:21:28.826 "zoned": false, 00:21:28.826 "supported_io_types": { 00:21:28.826 "read": true, 00:21:28.826 "write": true, 00:21:28.826 "unmap": true, 00:21:28.826 "flush": true, 00:21:28.826 "reset": true, 00:21:28.826 "nvme_admin": false, 00:21:28.826 "nvme_io": false, 00:21:28.826 "nvme_io_md": false, 00:21:28.826 "write_zeroes": true, 00:21:28.826 "zcopy": true, 00:21:28.826 "get_zone_info": false, 00:21:28.826 "zone_management": false, 00:21:28.826 "zone_append": false, 00:21:28.826 "compare": false, 00:21:28.826 "compare_and_write": false, 00:21:28.826 "abort": true, 00:21:28.826 "seek_hole": false, 00:21:28.826 "seek_data": false, 00:21:28.826 "copy": true, 00:21:28.826 "nvme_iov_md": false 00:21:28.826 }, 00:21:28.826 "memory_domains": [ 00:21:28.826 { 00:21:28.826 "dma_device_id": "system", 00:21:28.826 "dma_device_type": 1 00:21:28.826 }, 00:21:28.826 { 00:21:28.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.826 "dma_device_type": 2 00:21:28.826 } 00:21:28.826 ], 00:21:28.826 "driver_specific": {} 00:21:28.826 } 00:21:28.826 ] 00:21:28.826 06:52:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:28.826 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:28.826 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:28.826 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:29.085 BaseBdev3 00:21:29.085 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:29.085 06:52:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:21:29.085 06:52:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:29.085 06:52:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:29.085 06:52:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:29.085 06:52:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:29.085 06:52:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:29.344 06:52:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:29.604 [ 00:21:29.604 { 00:21:29.604 "name": "BaseBdev3", 00:21:29.604 "aliases": [ 00:21:29.604 "27190aa2-cc86-49c7-bc6a-e572f967e2a4" 00:21:29.604 ], 00:21:29.604 "product_name": "Malloc disk", 00:21:29.604 "block_size": 512, 00:21:29.604 "num_blocks": 65536, 00:21:29.604 "uuid": "27190aa2-cc86-49c7-bc6a-e572f967e2a4", 00:21:29.604 "assigned_rate_limits": { 00:21:29.604 "rw_ios_per_sec": 0, 00:21:29.604 "rw_mbytes_per_sec": 0, 00:21:29.604 "r_mbytes_per_sec": 0, 00:21:29.604 "w_mbytes_per_sec": 0 00:21:29.604 }, 00:21:29.604 "claimed": false, 00:21:29.604 "zoned": false, 00:21:29.604 "supported_io_types": { 00:21:29.604 "read": true, 00:21:29.604 "write": true, 00:21:29.604 "unmap": true, 00:21:29.604 "flush": true, 00:21:29.604 "reset": true, 00:21:29.604 "nvme_admin": false, 00:21:29.604 "nvme_io": false, 00:21:29.604 "nvme_io_md": false, 00:21:29.604 "write_zeroes": true, 00:21:29.604 "zcopy": true, 00:21:29.604 "get_zone_info": false, 00:21:29.604 "zone_management": false, 00:21:29.604 "zone_append": false, 00:21:29.604 "compare": false, 00:21:29.604 "compare_and_write": false, 00:21:29.604 "abort": true, 00:21:29.604 "seek_hole": false, 00:21:29.604 "seek_data": false, 00:21:29.604 "copy": true, 00:21:29.604 "nvme_iov_md": false 00:21:29.604 }, 00:21:29.604 "memory_domains": [ 00:21:29.604 { 00:21:29.604 "dma_device_id": "system", 00:21:29.604 "dma_device_type": 1 00:21:29.604 }, 00:21:29.604 { 00:21:29.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.604 "dma_device_type": 2 00:21:29.604 } 00:21:29.604 ], 00:21:29.604 "driver_specific": {} 00:21:29.604 } 00:21:29.604 ] 00:21:29.604 06:52:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:29.604 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:29.604 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:29.604 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:29.864 [2024-08-14 06:52:56.956671] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:29.864 [2024-08-14 06:52:56.956738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:29.864 [2024-08-14 06:52:56.956766] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:29.864 [2024-08-14 06:52:56.958879] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:29.864 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:29.864 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:29.864 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:29.864 06:52:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:29.864 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:29.864 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:29.864 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:29.864 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:29.864 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:29.864 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:29.864 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.864 06:52:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.123 06:52:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:30.123 "name": "Existed_Raid", 00:21:30.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.123 "strip_size_kb": 64, 00:21:30.123 "state": "configuring", 00:21:30.123 "raid_level": "raid5f", 00:21:30.123 "superblock": false, 00:21:30.123 "num_base_bdevs": 3, 00:21:30.123 "num_base_bdevs_discovered": 2, 00:21:30.123 "num_base_bdevs_operational": 3, 00:21:30.123 "base_bdevs_list": [ 00:21:30.123 { 00:21:30.123 "name": "BaseBdev1", 00:21:30.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.123 "is_configured": false, 00:21:30.123 "data_offset": 0, 00:21:30.123 "data_size": 0 00:21:30.123 }, 00:21:30.123 { 00:21:30.123 "name": "BaseBdev2", 00:21:30.123 "uuid": "e799b707-f9bd-4c8a-9663-3101fb465869", 00:21:30.123 "is_configured": true, 00:21:30.123 "data_offset": 0, 00:21:30.123 "data_size": 65536 00:21:30.123 }, 00:21:30.123 { 00:21:30.123 "name": "BaseBdev3", 00:21:30.123 "uuid": "27190aa2-cc86-49c7-bc6a-e572f967e2a4", 00:21:30.123 "is_configured": true, 00:21:30.123 "data_offset": 0, 00:21:30.123 "data_size": 65536 00:21:30.123 } 00:21:30.123 ] 00:21:30.123 }' 00:21:30.123 06:52:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:30.123 06:52:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.690 06:52:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:30.949 [2024-08-14 06:52:58.114770] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:30.949 06:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:30.949 06:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:30.949 06:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:30.949 06:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:30.949 06:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:30.949 06:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:30.949 
06:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:30.949 06:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:30.949 06:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:30.949 06:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:30.950 06:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.950 06:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.209 06:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:31.209 "name": "Existed_Raid", 00:21:31.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.209 "strip_size_kb": 64, 00:21:31.209 "state": "configuring", 00:21:31.209 "raid_level": "raid5f", 00:21:31.209 "superblock": false, 00:21:31.209 "num_base_bdevs": 3, 00:21:31.209 "num_base_bdevs_discovered": 1, 00:21:31.209 "num_base_bdevs_operational": 3, 00:21:31.209 "base_bdevs_list": [ 00:21:31.209 { 00:21:31.209 "name": "BaseBdev1", 00:21:31.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.209 "is_configured": false, 00:21:31.209 "data_offset": 0, 00:21:31.209 "data_size": 0 00:21:31.209 }, 00:21:31.209 { 00:21:31.209 "name": null, 00:21:31.209 "uuid": "e799b707-f9bd-4c8a-9663-3101fb465869", 00:21:31.209 "is_configured": false, 00:21:31.209 "data_offset": 0, 00:21:31.209 "data_size": 65536 00:21:31.209 }, 00:21:31.209 { 00:21:31.209 "name": "BaseBdev3", 00:21:31.209 "uuid": "27190aa2-cc86-49c7-bc6a-e572f967e2a4", 00:21:31.209 "is_configured": true, 00:21:31.209 "data_offset": 0, 00:21:31.209 "data_size": 65536 00:21:31.209 } 00:21:31.209 ] 00:21:31.209 }' 00:21:31.209 06:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:31.209 06:52:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.777 06:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.777 06:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:32.035 06:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:32.035 06:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:32.294 [2024-08-14 06:52:59.508365] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:32.294 BaseBdev1 00:21:32.294 06:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:32.294 06:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:32.294 06:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:32.294 06:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:32.294 06:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:32.294 06:52:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:32.294 06:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:32.553 06:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:32.811 [ 00:21:32.811 { 00:21:32.811 "name": "BaseBdev1", 00:21:32.811 "aliases": [ 00:21:32.811 "b9afa0fd-5862-4ee9-b1a3-cac1ba71b62b" 00:21:32.811 ], 00:21:32.811 "product_name": "Malloc disk", 00:21:32.811 "block_size": 512, 00:21:32.811 "num_blocks": 65536, 00:21:32.811 "uuid": "b9afa0fd-5862-4ee9-b1a3-cac1ba71b62b", 00:21:32.811 "assigned_rate_limits": { 00:21:32.811 "rw_ios_per_sec": 0, 00:21:32.811 "rw_mbytes_per_sec": 0, 00:21:32.811 "r_mbytes_per_sec": 0, 00:21:32.811 "w_mbytes_per_sec": 0 00:21:32.811 }, 00:21:32.811 "claimed": true, 00:21:32.812 "claim_type": "exclusive_write", 00:21:32.812 "zoned": false, 00:21:32.812 "supported_io_types": { 00:21:32.812 "read": true, 00:21:32.812 "write": true, 00:21:32.812 "unmap": true, 00:21:32.812 "flush": true, 00:21:32.812 "reset": true, 00:21:32.812 "nvme_admin": false, 00:21:32.812 "nvme_io": false, 00:21:32.812 "nvme_io_md": false, 00:21:32.812 "write_zeroes": true, 00:21:32.812 "zcopy": true, 00:21:32.812 "get_zone_info": false, 00:21:32.812 "zone_management": false, 00:21:32.812 "zone_append": false, 00:21:32.812 "compare": false, 00:21:32.812 "compare_and_write": false, 00:21:32.812 "abort": true, 00:21:32.812 "seek_hole": false, 00:21:32.812 "seek_data": false, 00:21:32.812 "copy": true, 00:21:32.812 "nvme_iov_md": false 00:21:32.812 }, 00:21:32.812 "memory_domains": [ 00:21:32.812 { 00:21:32.812 "dma_device_id": "system", 00:21:32.812 "dma_device_type": 1 00:21:32.812 }, 00:21:32.812 { 00:21:32.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.812 "dma_device_type": 2 00:21:32.812 } 00:21:32.812 ], 00:21:32.812 "driver_specific": {} 00:21:32.812 } 00:21:32.812 ] 00:21:32.812 06:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:32.812 06:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:32.812 06:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:32.812 06:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:32.812 06:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:32.812 06:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:32.812 06:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:32.812 06:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:32.812 06:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:32.812 06:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:32.812 06:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:32.812 06:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.812 06:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.070 06:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:33.070 "name": "Existed_Raid", 00:21:33.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.070 "strip_size_kb": 64, 00:21:33.070 "state": "configuring", 00:21:33.070 "raid_level": "raid5f", 00:21:33.070 "superblock": false, 00:21:33.070 "num_base_bdevs": 3, 00:21:33.070 "num_base_bdevs_discovered": 2, 00:21:33.070 "num_base_bdevs_operational": 3, 00:21:33.070 "base_bdevs_list": [ 00:21:33.070 { 00:21:33.070 "name": "BaseBdev1", 00:21:33.070 "uuid": "b9afa0fd-5862-4ee9-b1a3-cac1ba71b62b", 00:21:33.070 "is_configured": true, 00:21:33.070 "data_offset": 0, 00:21:33.070 "data_size": 65536 00:21:33.070 }, 00:21:33.070 { 00:21:33.070 "name": null, 00:21:33.070 "uuid": "e799b707-f9bd-4c8a-9663-3101fb465869", 00:21:33.070 "is_configured": false, 00:21:33.070 "data_offset": 0, 00:21:33.070 "data_size": 65536 00:21:33.070 }, 00:21:33.070 { 00:21:33.070 "name": "BaseBdev3", 00:21:33.070 "uuid": "27190aa2-cc86-49c7-bc6a-e572f967e2a4", 00:21:33.070 "is_configured": true, 00:21:33.070 "data_offset": 0, 00:21:33.070 "data_size": 65536 00:21:33.070 } 00:21:33.070 ] 00:21:33.070 }' 00:21:33.070 06:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:33.070 06:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.637 06:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:33.637 06:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.929 06:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:33.929 06:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:34.187 [2024-08-14 06:53:01.346378] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:34.187 06:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:34.187 06:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:34.187 06:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:34.187 06:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:34.187 06:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:34.187 06:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:34.187 06:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:34.187 06:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:34.187 06:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:34.187 06:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
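[editor's note] The verify_raid_bdev_state check being set up in the trace above reduces to a single RPC call plus a jq filter. The sketch below is illustrative only; the script path, socket path, and bdev name are fixtures of this test run, not defaults.

  # Condensed form of the check at bdev/bdev_raid.sh@126: fetch the raid entry
  # and inspect its state; at this point in the run it reports "configuring"
  # with 1 of 3 base bdevs discovered.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  raid_bdev_info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")')
  echo "$raid_bdev_info" | jq -r .state
  echo "$raid_bdev_info" | jq -r .num_base_bdevs_discovered
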
00:21:34.187 06:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.187 06:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.446 06:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:34.446 "name": "Existed_Raid", 00:21:34.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.446 "strip_size_kb": 64, 00:21:34.446 "state": "configuring", 00:21:34.446 "raid_level": "raid5f", 00:21:34.446 "superblock": false, 00:21:34.446 "num_base_bdevs": 3, 00:21:34.446 "num_base_bdevs_discovered": 1, 00:21:34.446 "num_base_bdevs_operational": 3, 00:21:34.446 "base_bdevs_list": [ 00:21:34.446 { 00:21:34.446 "name": "BaseBdev1", 00:21:34.446 "uuid": "b9afa0fd-5862-4ee9-b1a3-cac1ba71b62b", 00:21:34.446 "is_configured": true, 00:21:34.446 "data_offset": 0, 00:21:34.446 "data_size": 65536 00:21:34.446 }, 00:21:34.446 { 00:21:34.446 "name": null, 00:21:34.446 "uuid": "e799b707-f9bd-4c8a-9663-3101fb465869", 00:21:34.446 "is_configured": false, 00:21:34.446 "data_offset": 0, 00:21:34.446 "data_size": 65536 00:21:34.446 }, 00:21:34.446 { 00:21:34.446 "name": null, 00:21:34.446 "uuid": "27190aa2-cc86-49c7-bc6a-e572f967e2a4", 00:21:34.446 "is_configured": false, 00:21:34.446 "data_offset": 0, 00:21:34.446 "data_size": 65536 00:21:34.446 } 00:21:34.446 ] 00:21:34.446 }' 00:21:34.446 06:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:34.446 06:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.012 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.012 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:35.270 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:35.270 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:35.529 [2024-08-14 06:53:02.715657] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:35.529 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:35.529 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:35.529 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:35.529 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:35.529 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:35.529 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:35.529 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:35.529 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:35.529 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:21:35.529 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:35.529 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.529 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.787 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:35.787 "name": "Existed_Raid", 00:21:35.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.787 "strip_size_kb": 64, 00:21:35.787 "state": "configuring", 00:21:35.787 "raid_level": "raid5f", 00:21:35.787 "superblock": false, 00:21:35.787 "num_base_bdevs": 3, 00:21:35.787 "num_base_bdevs_discovered": 2, 00:21:35.787 "num_base_bdevs_operational": 3, 00:21:35.787 "base_bdevs_list": [ 00:21:35.787 { 00:21:35.787 "name": "BaseBdev1", 00:21:35.787 "uuid": "b9afa0fd-5862-4ee9-b1a3-cac1ba71b62b", 00:21:35.787 "is_configured": true, 00:21:35.787 "data_offset": 0, 00:21:35.787 "data_size": 65536 00:21:35.787 }, 00:21:35.787 { 00:21:35.787 "name": null, 00:21:35.787 "uuid": "e799b707-f9bd-4c8a-9663-3101fb465869", 00:21:35.787 "is_configured": false, 00:21:35.787 "data_offset": 0, 00:21:35.787 "data_size": 65536 00:21:35.787 }, 00:21:35.787 { 00:21:35.787 "name": "BaseBdev3", 00:21:35.787 "uuid": "27190aa2-cc86-49c7-bc6a-e572f967e2a4", 00:21:35.787 "is_configured": true, 00:21:35.787 "data_offset": 0, 00:21:35.787 "data_size": 65536 00:21:35.787 } 00:21:35.787 ] 00:21:35.787 }' 00:21:35.787 06:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:35.787 06:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.353 06:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.353 06:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:36.614 06:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:36.614 06:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:36.875 [2024-08-14 06:53:04.102077] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:37.134 06:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:37.134 06:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:37.134 06:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:37.134 06:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:37.134 06:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:37.134 06:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:37.134 06:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:37.134 06:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:37.134 
06:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:37.134 06:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:37.134 06:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.134 06:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.392 06:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:37.392 "name": "Existed_Raid", 00:21:37.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.392 "strip_size_kb": 64, 00:21:37.392 "state": "configuring", 00:21:37.392 "raid_level": "raid5f", 00:21:37.392 "superblock": false, 00:21:37.392 "num_base_bdevs": 3, 00:21:37.392 "num_base_bdevs_discovered": 1, 00:21:37.392 "num_base_bdevs_operational": 3, 00:21:37.392 "base_bdevs_list": [ 00:21:37.392 { 00:21:37.392 "name": null, 00:21:37.392 "uuid": "b9afa0fd-5862-4ee9-b1a3-cac1ba71b62b", 00:21:37.392 "is_configured": false, 00:21:37.392 "data_offset": 0, 00:21:37.392 "data_size": 65536 00:21:37.392 }, 00:21:37.392 { 00:21:37.392 "name": null, 00:21:37.392 "uuid": "e799b707-f9bd-4c8a-9663-3101fb465869", 00:21:37.392 "is_configured": false, 00:21:37.392 "data_offset": 0, 00:21:37.392 "data_size": 65536 00:21:37.392 }, 00:21:37.392 { 00:21:37.392 "name": "BaseBdev3", 00:21:37.392 "uuid": "27190aa2-cc86-49c7-bc6a-e572f967e2a4", 00:21:37.392 "is_configured": true, 00:21:37.392 "data_offset": 0, 00:21:37.392 "data_size": 65536 00:21:37.392 } 00:21:37.392 ] 00:21:37.392 }' 00:21:37.392 06:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:37.392 06:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.959 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:37.959 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.218 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:38.218 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:38.477 [2024-08-14 06:53:05.530941] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:38.477 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:38.477 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:38.477 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:38.477 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:38.477 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:38.477 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:38.477 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
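[editor's note] The step verified here re-attached an existing malloc bdev to the array with bdev_raid_add_base_bdev and then re-reads the per-slot flags. A minimal sketch of that sequence, assembled from commands already shown in this trace (names and socket path are the test's own):

  # Re-add a base bdev to an existing raid, then list each slot's name and
  # configured flag (mirrors bdev/bdev_raid.sh@329 and the @126 query above).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_add_base_bdev Existed_Raid BaseBdev2
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
      | jq -r '.[0].base_bdevs_list[] | "\(.name) \(.is_configured)"'
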
00:21:38.477 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.477 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.477 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.477 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.477 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.736 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.736 "name": "Existed_Raid", 00:21:38.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.736 "strip_size_kb": 64, 00:21:38.736 "state": "configuring", 00:21:38.736 "raid_level": "raid5f", 00:21:38.736 "superblock": false, 00:21:38.736 "num_base_bdevs": 3, 00:21:38.736 "num_base_bdevs_discovered": 2, 00:21:38.736 "num_base_bdevs_operational": 3, 00:21:38.736 "base_bdevs_list": [ 00:21:38.736 { 00:21:38.736 "name": null, 00:21:38.736 "uuid": "b9afa0fd-5862-4ee9-b1a3-cac1ba71b62b", 00:21:38.736 "is_configured": false, 00:21:38.736 "data_offset": 0, 00:21:38.736 "data_size": 65536 00:21:38.736 }, 00:21:38.736 { 00:21:38.736 "name": "BaseBdev2", 00:21:38.736 "uuid": "e799b707-f9bd-4c8a-9663-3101fb465869", 00:21:38.736 "is_configured": true, 00:21:38.736 "data_offset": 0, 00:21:38.736 "data_size": 65536 00:21:38.736 }, 00:21:38.736 { 00:21:38.736 "name": "BaseBdev3", 00:21:38.736 "uuid": "27190aa2-cc86-49c7-bc6a-e572f967e2a4", 00:21:38.736 "is_configured": true, 00:21:38.736 "data_offset": 0, 00:21:38.736 "data_size": 65536 00:21:38.736 } 00:21:38.736 ] 00:21:38.736 }' 00:21:38.736 06:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.736 06:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.306 06:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.306 06:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:39.564 06:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:39.564 06:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.564 06:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:39.821 06:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b9afa0fd-5862-4ee9-b1a3-cac1ba71b62b 00:21:40.079 [2024-08-14 06:53:07.295656] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:40.079 [2024-08-14 06:53:07.295813] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:21:40.079 [2024-08-14 06:53:07.295843] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:40.079 [2024-08-14 06:53:07.296133] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 
00:21:40.079 [2024-08-14 06:53:07.296680] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:21:40.079 [2024-08-14 06:53:07.296746] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:21:40.079 [2024-08-14 06:53:07.296997] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.079 NewBaseBdev 00:21:40.079 06:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:40.079 06:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:21:40.079 06:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:40.079 06:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:40.079 06:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:40.079 06:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:40.079 06:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:40.338 06:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:40.596 [ 00:21:40.596 { 00:21:40.596 "name": "NewBaseBdev", 00:21:40.596 "aliases": [ 00:21:40.596 "b9afa0fd-5862-4ee9-b1a3-cac1ba71b62b" 00:21:40.596 ], 00:21:40.596 "product_name": "Malloc disk", 00:21:40.596 "block_size": 512, 00:21:40.596 "num_blocks": 65536, 00:21:40.596 "uuid": "b9afa0fd-5862-4ee9-b1a3-cac1ba71b62b", 00:21:40.596 "assigned_rate_limits": { 00:21:40.596 "rw_ios_per_sec": 0, 00:21:40.597 "rw_mbytes_per_sec": 0, 00:21:40.597 "r_mbytes_per_sec": 0, 00:21:40.597 "w_mbytes_per_sec": 0 00:21:40.597 }, 00:21:40.597 "claimed": true, 00:21:40.597 "claim_type": "exclusive_write", 00:21:40.597 "zoned": false, 00:21:40.597 "supported_io_types": { 00:21:40.597 "read": true, 00:21:40.597 "write": true, 00:21:40.597 "unmap": true, 00:21:40.597 "flush": true, 00:21:40.597 "reset": true, 00:21:40.597 "nvme_admin": false, 00:21:40.597 "nvme_io": false, 00:21:40.597 "nvme_io_md": false, 00:21:40.597 "write_zeroes": true, 00:21:40.597 "zcopy": true, 00:21:40.597 "get_zone_info": false, 00:21:40.597 "zone_management": false, 00:21:40.597 "zone_append": false, 00:21:40.597 "compare": false, 00:21:40.597 "compare_and_write": false, 00:21:40.597 "abort": true, 00:21:40.597 "seek_hole": false, 00:21:40.597 "seek_data": false, 00:21:40.597 "copy": true, 00:21:40.597 "nvme_iov_md": false 00:21:40.597 }, 00:21:40.597 "memory_domains": [ 00:21:40.597 { 00:21:40.597 "dma_device_id": "system", 00:21:40.597 "dma_device_type": 1 00:21:40.597 }, 00:21:40.597 { 00:21:40.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.597 "dma_device_type": 2 00:21:40.597 } 00:21:40.597 ], 00:21:40.597 "driver_specific": {} 00:21:40.597 } 00:21:40.597 ] 00:21:40.597 06:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:40.597 06:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:40.597 06:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:21:40.597 06:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:40.597 06:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:40.597 06:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:40.597 06:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:40.597 06:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.597 06:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.597 06:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.597 06:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.597 06:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.597 06:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.855 06:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:40.855 "name": "Existed_Raid", 00:21:40.855 "uuid": "3640f569-8afc-4a25-b7c3-f3705cced689", 00:21:40.855 "strip_size_kb": 64, 00:21:40.855 "state": "online", 00:21:40.855 "raid_level": "raid5f", 00:21:40.855 "superblock": false, 00:21:40.855 "num_base_bdevs": 3, 00:21:40.855 "num_base_bdevs_discovered": 3, 00:21:40.855 "num_base_bdevs_operational": 3, 00:21:40.855 "base_bdevs_list": [ 00:21:40.855 { 00:21:40.855 "name": "NewBaseBdev", 00:21:40.855 "uuid": "b9afa0fd-5862-4ee9-b1a3-cac1ba71b62b", 00:21:40.855 "is_configured": true, 00:21:40.855 "data_offset": 0, 00:21:40.855 "data_size": 65536 00:21:40.855 }, 00:21:40.855 { 00:21:40.855 "name": "BaseBdev2", 00:21:40.855 "uuid": "e799b707-f9bd-4c8a-9663-3101fb465869", 00:21:40.855 "is_configured": true, 00:21:40.855 "data_offset": 0, 00:21:40.855 "data_size": 65536 00:21:40.855 }, 00:21:40.855 { 00:21:40.855 "name": "BaseBdev3", 00:21:40.855 "uuid": "27190aa2-cc86-49c7-bc6a-e572f967e2a4", 00:21:40.855 "is_configured": true, 00:21:40.855 "data_offset": 0, 00:21:40.855 "data_size": 65536 00:21:40.855 } 00:21:40.855 ] 00:21:40.855 }' 00:21:40.855 06:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:40.855 06:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.790 06:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:41.790 06:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:41.790 06:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:41.790 06:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:41.790 06:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:41.790 06:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:41.790 06:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:41.790 06:53:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:41.790 [2024-08-14 06:53:08.982314] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:41.790 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:41.790 "name": "Existed_Raid", 00:21:41.790 "aliases": [ 00:21:41.790 "3640f569-8afc-4a25-b7c3-f3705cced689" 00:21:41.790 ], 00:21:41.790 "product_name": "Raid Volume", 00:21:41.790 "block_size": 512, 00:21:41.790 "num_blocks": 131072, 00:21:41.790 "uuid": "3640f569-8afc-4a25-b7c3-f3705cced689", 00:21:41.790 "assigned_rate_limits": { 00:21:41.790 "rw_ios_per_sec": 0, 00:21:41.790 "rw_mbytes_per_sec": 0, 00:21:41.790 "r_mbytes_per_sec": 0, 00:21:41.790 "w_mbytes_per_sec": 0 00:21:41.790 }, 00:21:41.790 "claimed": false, 00:21:41.790 "zoned": false, 00:21:41.790 "supported_io_types": { 00:21:41.790 "read": true, 00:21:41.790 "write": true, 00:21:41.790 "unmap": false, 00:21:41.790 "flush": false, 00:21:41.790 "reset": true, 00:21:41.790 "nvme_admin": false, 00:21:41.790 "nvme_io": false, 00:21:41.790 "nvme_io_md": false, 00:21:41.790 "write_zeroes": true, 00:21:41.790 "zcopy": false, 00:21:41.790 "get_zone_info": false, 00:21:41.790 "zone_management": false, 00:21:41.790 "zone_append": false, 00:21:41.790 "compare": false, 00:21:41.790 "compare_and_write": false, 00:21:41.790 "abort": false, 00:21:41.790 "seek_hole": false, 00:21:41.790 "seek_data": false, 00:21:41.790 "copy": false, 00:21:41.790 "nvme_iov_md": false 00:21:41.790 }, 00:21:41.790 "driver_specific": { 00:21:41.790 "raid": { 00:21:41.790 "uuid": "3640f569-8afc-4a25-b7c3-f3705cced689", 00:21:41.790 "strip_size_kb": 64, 00:21:41.790 "state": "online", 00:21:41.790 "raid_level": "raid5f", 00:21:41.790 "superblock": false, 00:21:41.790 "num_base_bdevs": 3, 00:21:41.790 "num_base_bdevs_discovered": 3, 00:21:41.790 "num_base_bdevs_operational": 3, 00:21:41.790 "base_bdevs_list": [ 00:21:41.790 { 00:21:41.790 "name": "NewBaseBdev", 00:21:41.790 "uuid": "b9afa0fd-5862-4ee9-b1a3-cac1ba71b62b", 00:21:41.790 "is_configured": true, 00:21:41.790 "data_offset": 0, 00:21:41.790 "data_size": 65536 00:21:41.790 }, 00:21:41.790 { 00:21:41.790 "name": "BaseBdev2", 00:21:41.790 "uuid": "e799b707-f9bd-4c8a-9663-3101fb465869", 00:21:41.790 "is_configured": true, 00:21:41.790 "data_offset": 0, 00:21:41.790 "data_size": 65536 00:21:41.790 }, 00:21:41.790 { 00:21:41.790 "name": "BaseBdev3", 00:21:41.790 "uuid": "27190aa2-cc86-49c7-bc6a-e572f967e2a4", 00:21:41.790 "is_configured": true, 00:21:41.790 "data_offset": 0, 00:21:41.790 "data_size": 65536 00:21:41.790 } 00:21:41.790 ] 00:21:41.790 } 00:21:41.790 } 00:21:41.790 }' 00:21:41.790 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:42.049 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:42.049 BaseBdev2 00:21:42.049 BaseBdev3' 00:21:42.049 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:42.049 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:42.049 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:42.307 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:21:42.307 "name": "NewBaseBdev", 00:21:42.307 "aliases": [ 00:21:42.307 "b9afa0fd-5862-4ee9-b1a3-cac1ba71b62b" 00:21:42.307 ], 00:21:42.307 "product_name": "Malloc disk", 00:21:42.307 "block_size": 512, 00:21:42.307 "num_blocks": 65536, 00:21:42.307 "uuid": "b9afa0fd-5862-4ee9-b1a3-cac1ba71b62b", 00:21:42.307 "assigned_rate_limits": { 00:21:42.307 "rw_ios_per_sec": 0, 00:21:42.307 "rw_mbytes_per_sec": 0, 00:21:42.307 "r_mbytes_per_sec": 0, 00:21:42.307 "w_mbytes_per_sec": 0 00:21:42.307 }, 00:21:42.307 "claimed": true, 00:21:42.307 "claim_type": "exclusive_write", 00:21:42.307 "zoned": false, 00:21:42.307 "supported_io_types": { 00:21:42.307 "read": true, 00:21:42.307 "write": true, 00:21:42.307 "unmap": true, 00:21:42.307 "flush": true, 00:21:42.307 "reset": true, 00:21:42.307 "nvme_admin": false, 00:21:42.307 "nvme_io": false, 00:21:42.307 "nvme_io_md": false, 00:21:42.307 "write_zeroes": true, 00:21:42.307 "zcopy": true, 00:21:42.307 "get_zone_info": false, 00:21:42.307 "zone_management": false, 00:21:42.307 "zone_append": false, 00:21:42.307 "compare": false, 00:21:42.307 "compare_and_write": false, 00:21:42.307 "abort": true, 00:21:42.307 "seek_hole": false, 00:21:42.307 "seek_data": false, 00:21:42.307 "copy": true, 00:21:42.307 "nvme_iov_md": false 00:21:42.307 }, 00:21:42.307 "memory_domains": [ 00:21:42.307 { 00:21:42.307 "dma_device_id": "system", 00:21:42.307 "dma_device_type": 1 00:21:42.307 }, 00:21:42.307 { 00:21:42.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.307 "dma_device_type": 2 00:21:42.307 } 00:21:42.307 ], 00:21:42.307 "driver_specific": {} 00:21:42.307 }' 00:21:42.307 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:42.307 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:42.307 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:42.307 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:42.307 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:42.307 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:42.307 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:42.564 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:42.564 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:42.564 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.564 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.564 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:42.564 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:42.564 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:42.564 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:42.822 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:42.822 "name": "BaseBdev2", 00:21:42.822 "aliases": [ 00:21:42.822 "e799b707-f9bd-4c8a-9663-3101fb465869" 00:21:42.822 ], 00:21:42.822 
"product_name": "Malloc disk", 00:21:42.822 "block_size": 512, 00:21:42.822 "num_blocks": 65536, 00:21:42.822 "uuid": "e799b707-f9bd-4c8a-9663-3101fb465869", 00:21:42.822 "assigned_rate_limits": { 00:21:42.822 "rw_ios_per_sec": 0, 00:21:42.822 "rw_mbytes_per_sec": 0, 00:21:42.822 "r_mbytes_per_sec": 0, 00:21:42.822 "w_mbytes_per_sec": 0 00:21:42.822 }, 00:21:42.822 "claimed": true, 00:21:42.822 "claim_type": "exclusive_write", 00:21:42.822 "zoned": false, 00:21:42.822 "supported_io_types": { 00:21:42.822 "read": true, 00:21:42.822 "write": true, 00:21:42.822 "unmap": true, 00:21:42.822 "flush": true, 00:21:42.822 "reset": true, 00:21:42.822 "nvme_admin": false, 00:21:42.822 "nvme_io": false, 00:21:42.822 "nvme_io_md": false, 00:21:42.822 "write_zeroes": true, 00:21:42.822 "zcopy": true, 00:21:42.822 "get_zone_info": false, 00:21:42.822 "zone_management": false, 00:21:42.822 "zone_append": false, 00:21:42.822 "compare": false, 00:21:42.822 "compare_and_write": false, 00:21:42.822 "abort": true, 00:21:42.822 "seek_hole": false, 00:21:42.822 "seek_data": false, 00:21:42.822 "copy": true, 00:21:42.822 "nvme_iov_md": false 00:21:42.822 }, 00:21:42.822 "memory_domains": [ 00:21:42.822 { 00:21:42.822 "dma_device_id": "system", 00:21:42.822 "dma_device_type": 1 00:21:42.822 }, 00:21:42.822 { 00:21:42.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.822 "dma_device_type": 2 00:21:42.822 } 00:21:42.822 ], 00:21:42.822 "driver_specific": {} 00:21:42.822 }' 00:21:42.822 06:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:42.822 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:42.822 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:42.822 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.081 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.081 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:43.081 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.081 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.081 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:43.081 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:43.081 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:43.081 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:43.081 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:43.081 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:43.081 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:43.340 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:43.340 "name": "BaseBdev3", 00:21:43.340 "aliases": [ 00:21:43.340 "27190aa2-cc86-49c7-bc6a-e572f967e2a4" 00:21:43.340 ], 00:21:43.340 "product_name": "Malloc disk", 00:21:43.340 "block_size": 512, 00:21:43.340 "num_blocks": 65536, 00:21:43.340 "uuid": "27190aa2-cc86-49c7-bc6a-e572f967e2a4", 
00:21:43.340 "assigned_rate_limits": { 00:21:43.340 "rw_ios_per_sec": 0, 00:21:43.340 "rw_mbytes_per_sec": 0, 00:21:43.340 "r_mbytes_per_sec": 0, 00:21:43.340 "w_mbytes_per_sec": 0 00:21:43.340 }, 00:21:43.340 "claimed": true, 00:21:43.340 "claim_type": "exclusive_write", 00:21:43.340 "zoned": false, 00:21:43.340 "supported_io_types": { 00:21:43.340 "read": true, 00:21:43.340 "write": true, 00:21:43.340 "unmap": true, 00:21:43.340 "flush": true, 00:21:43.340 "reset": true, 00:21:43.340 "nvme_admin": false, 00:21:43.340 "nvme_io": false, 00:21:43.340 "nvme_io_md": false, 00:21:43.340 "write_zeroes": true, 00:21:43.340 "zcopy": true, 00:21:43.340 "get_zone_info": false, 00:21:43.340 "zone_management": false, 00:21:43.340 "zone_append": false, 00:21:43.340 "compare": false, 00:21:43.340 "compare_and_write": false, 00:21:43.340 "abort": true, 00:21:43.340 "seek_hole": false, 00:21:43.340 "seek_data": false, 00:21:43.340 "copy": true, 00:21:43.340 "nvme_iov_md": false 00:21:43.340 }, 00:21:43.340 "memory_domains": [ 00:21:43.340 { 00:21:43.340 "dma_device_id": "system", 00:21:43.340 "dma_device_type": 1 00:21:43.340 }, 00:21:43.340 { 00:21:43.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.340 "dma_device_type": 2 00:21:43.340 } 00:21:43.340 ], 00:21:43.340 "driver_specific": {} 00:21:43.340 }' 00:21:43.340 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.599 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.599 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:43.599 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.599 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.599 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:43.599 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.599 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.599 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:43.858 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:43.858 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:43.858 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:43.858 06:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:44.116 [2024-08-14 06:53:11.143310] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:44.116 [2024-08-14 06:53:11.143356] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:44.116 [2024-08-14 06:53:11.143447] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:44.116 [2024-08-14 06:53:11.143747] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:44.116 [2024-08-14 06:53:11.143759] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:21:44.117 06:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 97855 
00:21:44.117 06:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 97855 ']' 00:21:44.117 06:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # kill -0 97855 00:21:44.117 06:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # uname 00:21:44.117 06:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:44.117 06:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97855 00:21:44.117 killing process with pid 97855 00:21:44.117 06:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:44.117 06:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:44.117 06:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97855' 00:21:44.117 06:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@965 -- # kill 97855 00:21:44.117 [2024-08-14 06:53:11.190922] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:44.117 06:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # wait 97855 00:21:44.117 [2024-08-14 06:53:11.224064] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:44.376 ************************************ 00:21:44.376 END TEST raid5f_state_function_test 00:21:44.376 ************************************ 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:21:44.376 00:21:44.376 real 0m30.348s 00:21:44.376 user 0m56.571s 00:21:44.376 sys 0m4.304s 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.376 06:53:11 bdev_raid -- bdev/bdev_raid.sh@966 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:21:44.376 06:53:11 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:21:44.376 06:53:11 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:44.376 06:53:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:44.376 ************************************ 00:21:44.376 START TEST raid5f_state_function_test_sb 00:21:44.376 ************************************ 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid5f 3 true 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 
00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:44.376 Process raid pid: 98803 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=98803 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 98803' 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 98803 /var/tmp/spdk-raid.sock 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 98803 ']' 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:44.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
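At this point the harness starts a fresh bdev_svc application on a private RPC socket, records its pid (98803 here), and waits for the socket before issuing any RPCs; every rpc.py call later in the trace passes -s /var/tmp/spdk-raid.sock to reach this instance. The sketch below mirrors that startup/teardown pattern; the polling loop is a simplified stand-in for the script's waitforlisten helper, which does considerably more checking.

    # Hedged sketch of the harness pattern (paths as used in the trace).
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock

    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    raid_pid=$!

    # Wait until the UNIX-domain RPC socket answers.
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

    # ... bdev_raid_create / bdev_malloc_create / get_bdevs checks go here ...

    kill "$raid_pid"; wait "$raid_pid"
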
00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:44.376 06:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.636 [2024-08-14 06:53:11.654130] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:21:44.636 [2024-08-14 06:53:11.654290] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.636 [2024-08-14 06:53:11.804082] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.636 [2024-08-14 06:53:11.860226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.947 [2024-08-14 06:53:11.907066] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:44.947 [2024-08-14 06:53:11.907106] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:45.516 06:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:45.516 06:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:21:45.516 06:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:45.776 [2024-08-14 06:53:12.780389] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:45.776 [2024-08-14 06:53:12.780576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:45.776 [2024-08-14 06:53:12.780601] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:45.776 [2024-08-14 06:53:12.780611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:45.776 [2024-08-14 06:53:12.780623] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:45.776 [2024-08-14 06:53:12.780631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:45.776 06:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:45.776 06:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:45.776 06:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:45.776 06:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:45.776 06:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:45.776 06:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:45.776 06:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:45.776 06:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:45.776 06:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:45.776 06:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:21:45.776 06:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.776 06:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.035 06:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.035 "name": "Existed_Raid", 00:21:46.035 "uuid": "badf0c46-2d42-438d-a344-b15a65a9d303", 00:21:46.035 "strip_size_kb": 64, 00:21:46.035 "state": "configuring", 00:21:46.035 "raid_level": "raid5f", 00:21:46.035 "superblock": true, 00:21:46.035 "num_base_bdevs": 3, 00:21:46.035 "num_base_bdevs_discovered": 0, 00:21:46.035 "num_base_bdevs_operational": 3, 00:21:46.035 "base_bdevs_list": [ 00:21:46.035 { 00:21:46.035 "name": "BaseBdev1", 00:21:46.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.035 "is_configured": false, 00:21:46.035 "data_offset": 0, 00:21:46.035 "data_size": 0 00:21:46.035 }, 00:21:46.035 { 00:21:46.035 "name": "BaseBdev2", 00:21:46.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.035 "is_configured": false, 00:21:46.035 "data_offset": 0, 00:21:46.035 "data_size": 0 00:21:46.035 }, 00:21:46.035 { 00:21:46.035 "name": "BaseBdev3", 00:21:46.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.035 "is_configured": false, 00:21:46.035 "data_offset": 0, 00:21:46.035 "data_size": 0 00:21:46.035 } 00:21:46.035 ] 00:21:46.035 }' 00:21:46.035 06:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.035 06:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.604 06:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:46.863 [2024-08-14 06:53:13.930358] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:46.863 [2024-08-14 06:53:13.930515] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:21:46.863 06:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:47.123 [2024-08-14 06:53:14.178107] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:47.123 [2024-08-14 06:53:14.178290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:47.123 [2024-08-14 06:53:14.178312] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:47.123 [2024-08-14 06:53:14.178323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:47.123 [2024-08-14 06:53:14.178333] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:47.123 [2024-08-14 06:53:14.178342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:47.123 06:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:47.382 [2024-08-14 06:53:14.467974] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:21:47.382 BaseBdev1 00:21:47.382 06:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:47.382 06:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:47.382 06:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:47.382 06:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:47.382 06:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:47.382 06:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:47.382 06:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:47.640 06:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:47.898 [ 00:21:47.898 { 00:21:47.898 "name": "BaseBdev1", 00:21:47.898 "aliases": [ 00:21:47.898 "e9e2d8dc-7f61-47fa-9ae7-47db9ceb3808" 00:21:47.898 ], 00:21:47.898 "product_name": "Malloc disk", 00:21:47.898 "block_size": 512, 00:21:47.898 "num_blocks": 65536, 00:21:47.898 "uuid": "e9e2d8dc-7f61-47fa-9ae7-47db9ceb3808", 00:21:47.898 "assigned_rate_limits": { 00:21:47.898 "rw_ios_per_sec": 0, 00:21:47.898 "rw_mbytes_per_sec": 0, 00:21:47.898 "r_mbytes_per_sec": 0, 00:21:47.898 "w_mbytes_per_sec": 0 00:21:47.898 }, 00:21:47.898 "claimed": true, 00:21:47.898 "claim_type": "exclusive_write", 00:21:47.898 "zoned": false, 00:21:47.898 "supported_io_types": { 00:21:47.898 "read": true, 00:21:47.898 "write": true, 00:21:47.898 "unmap": true, 00:21:47.898 "flush": true, 00:21:47.898 "reset": true, 00:21:47.898 "nvme_admin": false, 00:21:47.898 "nvme_io": false, 00:21:47.898 "nvme_io_md": false, 00:21:47.898 "write_zeroes": true, 00:21:47.898 "zcopy": true, 00:21:47.898 "get_zone_info": false, 00:21:47.898 "zone_management": false, 00:21:47.898 "zone_append": false, 00:21:47.898 "compare": false, 00:21:47.898 "compare_and_write": false, 00:21:47.898 "abort": true, 00:21:47.898 "seek_hole": false, 00:21:47.898 "seek_data": false, 00:21:47.898 "copy": true, 00:21:47.898 "nvme_iov_md": false 00:21:47.898 }, 00:21:47.898 "memory_domains": [ 00:21:47.898 { 00:21:47.898 "dma_device_id": "system", 00:21:47.898 "dma_device_type": 1 00:21:47.898 }, 00:21:47.898 { 00:21:47.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.898 "dma_device_type": 2 00:21:47.898 } 00:21:47.898 ], 00:21:47.898 "driver_specific": {} 00:21:47.898 } 00:21:47.898 ] 00:21:47.898 06:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:47.898 06:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:47.898 06:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:47.898 06:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:47.898 06:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:47.898 06:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:21:47.898 06:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:47.898 06:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.898 06:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.898 06:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.898 06:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.898 06:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.898 06:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.157 06:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:48.157 "name": "Existed_Raid", 00:21:48.157 "uuid": "5ab9d0ea-3548-4881-8b03-fff9c623cedd", 00:21:48.157 "strip_size_kb": 64, 00:21:48.157 "state": "configuring", 00:21:48.157 "raid_level": "raid5f", 00:21:48.157 "superblock": true, 00:21:48.157 "num_base_bdevs": 3, 00:21:48.157 "num_base_bdevs_discovered": 1, 00:21:48.157 "num_base_bdevs_operational": 3, 00:21:48.157 "base_bdevs_list": [ 00:21:48.157 { 00:21:48.157 "name": "BaseBdev1", 00:21:48.157 "uuid": "e9e2d8dc-7f61-47fa-9ae7-47db9ceb3808", 00:21:48.157 "is_configured": true, 00:21:48.157 "data_offset": 2048, 00:21:48.157 "data_size": 63488 00:21:48.157 }, 00:21:48.157 { 00:21:48.157 "name": "BaseBdev2", 00:21:48.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.157 "is_configured": false, 00:21:48.157 "data_offset": 0, 00:21:48.157 "data_size": 0 00:21:48.157 }, 00:21:48.157 { 00:21:48.157 "name": "BaseBdev3", 00:21:48.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.157 "is_configured": false, 00:21:48.157 "data_offset": 0, 00:21:48.157 "data_size": 0 00:21:48.157 } 00:21:48.157 ] 00:21:48.157 }' 00:21:48.157 06:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:48.157 06:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.095 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:49.095 [2024-08-14 06:53:16.266038] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:49.095 [2024-08-14 06:53:16.266242] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:21:49.095 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:49.359 [2024-08-14 06:53:16.610183] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:49.359 [2024-08-14 06:53:16.612441] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:49.359 [2024-08-14 06:53:16.612516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:49.359 [2024-08-14 06:53:16.612531] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
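Note the effect of creating the array with -s: the raid reports "superblock": true, and the configured BaseBdev1 shows data_offset 2048 and data_size 63488 instead of the 0 / 65536 seen in the non-superblock test above, because the first 2048 blocks (1 MiB at a 512-byte block size) are reserved for raid metadata. The checks the trace performs can be reproduced by hand roughly as follows; this is a hedged sketch of what verify_raid_bdev_state appears to compare, not the function itself.

    # Hedged sketch: query the raid and assert the fields the test verifies.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    [[ $(jq -r '.state'                     <<< "$info") == "configuring" ]]
    [[ $(jq -r '.raid_level'                <<< "$info") == "raid5f" ]]
    [[ $(jq -r '.strip_size_kb'             <<< "$info") == 64 ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 1 ]]
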
BaseBdev3 00:21:49.359 [2024-08-14 06:53:16.612540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:49.618 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:49.618 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:49.618 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:49.618 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:49.618 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:49.618 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:49.618 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:49.618 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:49.618 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:49.618 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:49.618 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:49.618 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:49.618 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.618 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.876 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:49.876 "name": "Existed_Raid", 00:21:49.876 "uuid": "72ccf84e-c11a-49fd-9b62-ff7363b4496e", 00:21:49.876 "strip_size_kb": 64, 00:21:49.876 "state": "configuring", 00:21:49.876 "raid_level": "raid5f", 00:21:49.876 "superblock": true, 00:21:49.876 "num_base_bdevs": 3, 00:21:49.876 "num_base_bdevs_discovered": 1, 00:21:49.876 "num_base_bdevs_operational": 3, 00:21:49.876 "base_bdevs_list": [ 00:21:49.876 { 00:21:49.876 "name": "BaseBdev1", 00:21:49.876 "uuid": "e9e2d8dc-7f61-47fa-9ae7-47db9ceb3808", 00:21:49.876 "is_configured": true, 00:21:49.876 "data_offset": 2048, 00:21:49.876 "data_size": 63488 00:21:49.876 }, 00:21:49.876 { 00:21:49.876 "name": "BaseBdev2", 00:21:49.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.876 "is_configured": false, 00:21:49.876 "data_offset": 0, 00:21:49.876 "data_size": 0 00:21:49.876 }, 00:21:49.876 { 00:21:49.876 "name": "BaseBdev3", 00:21:49.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.876 "is_configured": false, 00:21:49.876 "data_offset": 0, 00:21:49.876 "data_size": 0 00:21:49.876 } 00:21:49.876 ] 00:21:49.876 }' 00:21:49.876 06:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:49.876 06:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.443 06:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:50.702 [2024-08-14 
06:53:17.812016] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:50.702 BaseBdev2 00:21:50.702 06:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:50.702 06:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:50.702 06:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:50.702 06:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:50.702 06:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:50.702 06:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:50.702 06:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:50.960 06:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:51.219 [ 00:21:51.219 { 00:21:51.219 "name": "BaseBdev2", 00:21:51.219 "aliases": [ 00:21:51.219 "d09761f7-5cc7-46af-9db2-5955d74f988b" 00:21:51.219 ], 00:21:51.219 "product_name": "Malloc disk", 00:21:51.219 "block_size": 512, 00:21:51.219 "num_blocks": 65536, 00:21:51.219 "uuid": "d09761f7-5cc7-46af-9db2-5955d74f988b", 00:21:51.219 "assigned_rate_limits": { 00:21:51.219 "rw_ios_per_sec": 0, 00:21:51.219 "rw_mbytes_per_sec": 0, 00:21:51.219 "r_mbytes_per_sec": 0, 00:21:51.219 "w_mbytes_per_sec": 0 00:21:51.219 }, 00:21:51.219 "claimed": true, 00:21:51.219 "claim_type": "exclusive_write", 00:21:51.219 "zoned": false, 00:21:51.219 "supported_io_types": { 00:21:51.219 "read": true, 00:21:51.219 "write": true, 00:21:51.219 "unmap": true, 00:21:51.219 "flush": true, 00:21:51.219 "reset": true, 00:21:51.219 "nvme_admin": false, 00:21:51.219 "nvme_io": false, 00:21:51.219 "nvme_io_md": false, 00:21:51.219 "write_zeroes": true, 00:21:51.219 "zcopy": true, 00:21:51.219 "get_zone_info": false, 00:21:51.219 "zone_management": false, 00:21:51.219 "zone_append": false, 00:21:51.219 "compare": false, 00:21:51.219 "compare_and_write": false, 00:21:51.219 "abort": true, 00:21:51.219 "seek_hole": false, 00:21:51.219 "seek_data": false, 00:21:51.219 "copy": true, 00:21:51.219 "nvme_iov_md": false 00:21:51.219 }, 00:21:51.219 "memory_domains": [ 00:21:51.219 { 00:21:51.219 "dma_device_id": "system", 00:21:51.219 "dma_device_type": 1 00:21:51.219 }, 00:21:51.219 { 00:21:51.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.219 "dma_device_type": 2 00:21:51.219 } 00:21:51.219 ], 00:21:51.219 "driver_specific": {} 00:21:51.219 } 00:21:51.219 ] 00:21:51.219 06:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:51.219 06:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:51.219 06:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:51.219 06:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:51.219 06:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:51.219 06:53:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:51.220 06:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:51.220 06:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:51.220 06:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:51.220 06:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:51.220 06:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:51.220 06:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:51.220 06:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:51.220 06:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.220 06:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.479 06:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:51.479 "name": "Existed_Raid", 00:21:51.479 "uuid": "72ccf84e-c11a-49fd-9b62-ff7363b4496e", 00:21:51.479 "strip_size_kb": 64, 00:21:51.479 "state": "configuring", 00:21:51.479 "raid_level": "raid5f", 00:21:51.479 "superblock": true, 00:21:51.479 "num_base_bdevs": 3, 00:21:51.479 "num_base_bdevs_discovered": 2, 00:21:51.479 "num_base_bdevs_operational": 3, 00:21:51.479 "base_bdevs_list": [ 00:21:51.479 { 00:21:51.479 "name": "BaseBdev1", 00:21:51.479 "uuid": "e9e2d8dc-7f61-47fa-9ae7-47db9ceb3808", 00:21:51.479 "is_configured": true, 00:21:51.479 "data_offset": 2048, 00:21:51.479 "data_size": 63488 00:21:51.479 }, 00:21:51.479 { 00:21:51.479 "name": "BaseBdev2", 00:21:51.479 "uuid": "d09761f7-5cc7-46af-9db2-5955d74f988b", 00:21:51.479 "is_configured": true, 00:21:51.479 "data_offset": 2048, 00:21:51.479 "data_size": 63488 00:21:51.479 }, 00:21:51.479 { 00:21:51.479 "name": "BaseBdev3", 00:21:51.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.479 "is_configured": false, 00:21:51.479 "data_offset": 0, 00:21:51.479 "data_size": 0 00:21:51.479 } 00:21:51.479 ] 00:21:51.479 }' 00:21:51.479 06:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:51.479 06:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.049 06:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:52.308 [2024-08-14 06:53:19.417524] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:52.308 [2024-08-14 06:53:19.417852] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:21:52.308 [2024-08-14 06:53:19.417920] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:52.308 [2024-08-14 06:53:19.418314] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:21:52.308 BaseBdev3 00:21:52.308 [2024-08-14 06:53:19.418853] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:21:52.308 [2024-08-14 06:53:19.418871] 
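With two of three members claimed the array is still "configuring"; it only transitions to "online" once the last base bdev is claimed, which is exactly what the next part of the trace shows: creating BaseBdev3 immediately triggers raid_bdev_configure_cont and the io device registration. A script waiting for that transition might look like the sketch below; wait_for_raid_state is a hypothetical helper (names and timeout are illustrative, not from the test script).

    # Hedged helper: poll until the raid reports the expected state.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    wait_for_raid_state() {
        local want=$1 tries=0
        while (( tries++ < 50 )); do
            state=$($RPC bdev_raid_get_bdevs all \
                    | jq -r '.[] | select(.name == "Existed_Raid").state')
            [[ $state == "$want" ]] && return 0
            sleep 0.1
        done
        return 1
    }

    $RPC bdev_malloc_create 32 512 -b BaseBdev3   # last member; array should go online
    wait_for_raid_state online
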
bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:21:52.308 [2024-08-14 06:53:19.419037] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.308 06:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:52.308 06:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:21:52.308 06:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:52.308 06:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:52.308 06:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:52.308 06:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:52.308 06:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:52.567 06:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:52.826 [ 00:21:52.826 { 00:21:52.826 "name": "BaseBdev3", 00:21:52.826 "aliases": [ 00:21:52.826 "1acee0f2-c796-4b7d-bcb6-4ac614041022" 00:21:52.826 ], 00:21:52.826 "product_name": "Malloc disk", 00:21:52.826 "block_size": 512, 00:21:52.826 "num_blocks": 65536, 00:21:52.826 "uuid": "1acee0f2-c796-4b7d-bcb6-4ac614041022", 00:21:52.826 "assigned_rate_limits": { 00:21:52.826 "rw_ios_per_sec": 0, 00:21:52.826 "rw_mbytes_per_sec": 0, 00:21:52.826 "r_mbytes_per_sec": 0, 00:21:52.826 "w_mbytes_per_sec": 0 00:21:52.826 }, 00:21:52.826 "claimed": true, 00:21:52.826 "claim_type": "exclusive_write", 00:21:52.826 "zoned": false, 00:21:52.826 "supported_io_types": { 00:21:52.826 "read": true, 00:21:52.826 "write": true, 00:21:52.826 "unmap": true, 00:21:52.826 "flush": true, 00:21:52.826 "reset": true, 00:21:52.826 "nvme_admin": false, 00:21:52.826 "nvme_io": false, 00:21:52.826 "nvme_io_md": false, 00:21:52.826 "write_zeroes": true, 00:21:52.826 "zcopy": true, 00:21:52.826 "get_zone_info": false, 00:21:52.826 "zone_management": false, 00:21:52.826 "zone_append": false, 00:21:52.826 "compare": false, 00:21:52.826 "compare_and_write": false, 00:21:52.826 "abort": true, 00:21:52.826 "seek_hole": false, 00:21:52.826 "seek_data": false, 00:21:52.826 "copy": true, 00:21:52.826 "nvme_iov_md": false 00:21:52.826 }, 00:21:52.826 "memory_domains": [ 00:21:52.826 { 00:21:52.826 "dma_device_id": "system", 00:21:52.826 "dma_device_type": 1 00:21:52.826 }, 00:21:52.826 { 00:21:52.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.826 "dma_device_type": 2 00:21:52.826 } 00:21:52.826 ], 00:21:52.826 "driver_specific": {} 00:21:52.826 } 00:21:52.826 ] 00:21:52.826 06:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:52.826 06:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:52.826 06:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:52.826 06:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:52.826 06:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=Existed_Raid 00:21:52.826 06:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:52.826 06:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:52.826 06:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:52.826 06:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:52.826 06:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:52.826 06:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:52.826 06:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:52.826 06:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:52.826 06:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:52.826 06:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.085 06:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:53.085 "name": "Existed_Raid", 00:21:53.085 "uuid": "72ccf84e-c11a-49fd-9b62-ff7363b4496e", 00:21:53.085 "strip_size_kb": 64, 00:21:53.085 "state": "online", 00:21:53.085 "raid_level": "raid5f", 00:21:53.085 "superblock": true, 00:21:53.085 "num_base_bdevs": 3, 00:21:53.085 "num_base_bdevs_discovered": 3, 00:21:53.085 "num_base_bdevs_operational": 3, 00:21:53.085 "base_bdevs_list": [ 00:21:53.085 { 00:21:53.085 "name": "BaseBdev1", 00:21:53.085 "uuid": "e9e2d8dc-7f61-47fa-9ae7-47db9ceb3808", 00:21:53.085 "is_configured": true, 00:21:53.085 "data_offset": 2048, 00:21:53.085 "data_size": 63488 00:21:53.085 }, 00:21:53.085 { 00:21:53.085 "name": "BaseBdev2", 00:21:53.085 "uuid": "d09761f7-5cc7-46af-9db2-5955d74f988b", 00:21:53.085 "is_configured": true, 00:21:53.085 "data_offset": 2048, 00:21:53.085 "data_size": 63488 00:21:53.085 }, 00:21:53.085 { 00:21:53.085 "name": "BaseBdev3", 00:21:53.085 "uuid": "1acee0f2-c796-4b7d-bcb6-4ac614041022", 00:21:53.085 "is_configured": true, 00:21:53.085 "data_offset": 2048, 00:21:53.085 "data_size": 63488 00:21:53.085 } 00:21:53.085 ] 00:21:53.085 }' 00:21:53.085 06:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:53.085 06:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:53.677 06:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:53.677 06:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:53.677 06:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:53.677 06:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:53.677 06:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:53.677 06:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:53.677 06:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:53.677 06:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:53.936 [2024-08-14 06:53:21.171602] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:54.195 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:54.195 "name": "Existed_Raid", 00:21:54.195 "aliases": [ 00:21:54.195 "72ccf84e-c11a-49fd-9b62-ff7363b4496e" 00:21:54.195 ], 00:21:54.195 "product_name": "Raid Volume", 00:21:54.195 "block_size": 512, 00:21:54.195 "num_blocks": 126976, 00:21:54.195 "uuid": "72ccf84e-c11a-49fd-9b62-ff7363b4496e", 00:21:54.195 "assigned_rate_limits": { 00:21:54.195 "rw_ios_per_sec": 0, 00:21:54.195 "rw_mbytes_per_sec": 0, 00:21:54.195 "r_mbytes_per_sec": 0, 00:21:54.195 "w_mbytes_per_sec": 0 00:21:54.195 }, 00:21:54.195 "claimed": false, 00:21:54.195 "zoned": false, 00:21:54.195 "supported_io_types": { 00:21:54.195 "read": true, 00:21:54.195 "write": true, 00:21:54.195 "unmap": false, 00:21:54.195 "flush": false, 00:21:54.195 "reset": true, 00:21:54.195 "nvme_admin": false, 00:21:54.195 "nvme_io": false, 00:21:54.195 "nvme_io_md": false, 00:21:54.195 "write_zeroes": true, 00:21:54.195 "zcopy": false, 00:21:54.195 "get_zone_info": false, 00:21:54.195 "zone_management": false, 00:21:54.195 "zone_append": false, 00:21:54.195 "compare": false, 00:21:54.195 "compare_and_write": false, 00:21:54.195 "abort": false, 00:21:54.195 "seek_hole": false, 00:21:54.195 "seek_data": false, 00:21:54.195 "copy": false, 00:21:54.195 "nvme_iov_md": false 00:21:54.195 }, 00:21:54.195 "driver_specific": { 00:21:54.195 "raid": { 00:21:54.195 "uuid": "72ccf84e-c11a-49fd-9b62-ff7363b4496e", 00:21:54.195 "strip_size_kb": 64, 00:21:54.196 "state": "online", 00:21:54.196 "raid_level": "raid5f", 00:21:54.196 "superblock": true, 00:21:54.196 "num_base_bdevs": 3, 00:21:54.196 "num_base_bdevs_discovered": 3, 00:21:54.196 "num_base_bdevs_operational": 3, 00:21:54.196 "base_bdevs_list": [ 00:21:54.196 { 00:21:54.196 "name": "BaseBdev1", 00:21:54.196 "uuid": "e9e2d8dc-7f61-47fa-9ae7-47db9ceb3808", 00:21:54.196 "is_configured": true, 00:21:54.196 "data_offset": 2048, 00:21:54.196 "data_size": 63488 00:21:54.196 }, 00:21:54.196 { 00:21:54.196 "name": "BaseBdev2", 00:21:54.196 "uuid": "d09761f7-5cc7-46af-9db2-5955d74f988b", 00:21:54.196 "is_configured": true, 00:21:54.196 "data_offset": 2048, 00:21:54.196 "data_size": 63488 00:21:54.196 }, 00:21:54.196 { 00:21:54.196 "name": "BaseBdev3", 00:21:54.196 "uuid": "1acee0f2-c796-4b7d-bcb6-4ac614041022", 00:21:54.196 "is_configured": true, 00:21:54.196 "data_offset": 2048, 00:21:54.196 "data_size": 63488 00:21:54.196 } 00:21:54.196 ] 00:21:54.196 } 00:21:54.196 } 00:21:54.196 }' 00:21:54.196 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:54.196 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:54.196 BaseBdev2 00:21:54.196 BaseBdev3' 00:21:54.196 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:54.196 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:54.196 06:53:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:54.455 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:54.455 "name": "BaseBdev1", 00:21:54.455 "aliases": [ 00:21:54.455 "e9e2d8dc-7f61-47fa-9ae7-47db9ceb3808" 00:21:54.455 ], 00:21:54.455 "product_name": "Malloc disk", 00:21:54.455 "block_size": 512, 00:21:54.455 "num_blocks": 65536, 00:21:54.455 "uuid": "e9e2d8dc-7f61-47fa-9ae7-47db9ceb3808", 00:21:54.455 "assigned_rate_limits": { 00:21:54.455 "rw_ios_per_sec": 0, 00:21:54.455 "rw_mbytes_per_sec": 0, 00:21:54.455 "r_mbytes_per_sec": 0, 00:21:54.455 "w_mbytes_per_sec": 0 00:21:54.455 }, 00:21:54.455 "claimed": true, 00:21:54.455 "claim_type": "exclusive_write", 00:21:54.455 "zoned": false, 00:21:54.455 "supported_io_types": { 00:21:54.455 "read": true, 00:21:54.455 "write": true, 00:21:54.455 "unmap": true, 00:21:54.455 "flush": true, 00:21:54.455 "reset": true, 00:21:54.455 "nvme_admin": false, 00:21:54.455 "nvme_io": false, 00:21:54.455 "nvme_io_md": false, 00:21:54.455 "write_zeroes": true, 00:21:54.455 "zcopy": true, 00:21:54.455 "get_zone_info": false, 00:21:54.455 "zone_management": false, 00:21:54.455 "zone_append": false, 00:21:54.455 "compare": false, 00:21:54.455 "compare_and_write": false, 00:21:54.455 "abort": true, 00:21:54.455 "seek_hole": false, 00:21:54.455 "seek_data": false, 00:21:54.455 "copy": true, 00:21:54.455 "nvme_iov_md": false 00:21:54.455 }, 00:21:54.455 "memory_domains": [ 00:21:54.455 { 00:21:54.455 "dma_device_id": "system", 00:21:54.455 "dma_device_type": 1 00:21:54.455 }, 00:21:54.455 { 00:21:54.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.455 "dma_device_type": 2 00:21:54.455 } 00:21:54.455 ], 00:21:54.455 "driver_specific": {} 00:21:54.455 }' 00:21:54.455 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.455 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.455 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:54.455 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:54.455 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:54.455 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:54.455 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:54.715 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:54.715 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:54.715 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:54.715 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:54.715 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:54.715 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:54.715 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:54.715 06:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:54.974 06:53:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:54.974 "name": "BaseBdev2", 00:21:54.974 "aliases": [ 00:21:54.974 "d09761f7-5cc7-46af-9db2-5955d74f988b" 00:21:54.974 ], 00:21:54.974 "product_name": "Malloc disk", 00:21:54.974 "block_size": 512, 00:21:54.974 "num_blocks": 65536, 00:21:54.974 "uuid": "d09761f7-5cc7-46af-9db2-5955d74f988b", 00:21:54.974 "assigned_rate_limits": { 00:21:54.974 "rw_ios_per_sec": 0, 00:21:54.974 "rw_mbytes_per_sec": 0, 00:21:54.974 "r_mbytes_per_sec": 0, 00:21:54.974 "w_mbytes_per_sec": 0 00:21:54.974 }, 00:21:54.974 "claimed": true, 00:21:54.974 "claim_type": "exclusive_write", 00:21:54.974 "zoned": false, 00:21:54.974 "supported_io_types": { 00:21:54.974 "read": true, 00:21:54.974 "write": true, 00:21:54.974 "unmap": true, 00:21:54.974 "flush": true, 00:21:54.974 "reset": true, 00:21:54.974 "nvme_admin": false, 00:21:54.974 "nvme_io": false, 00:21:54.974 "nvme_io_md": false, 00:21:54.974 "write_zeroes": true, 00:21:54.974 "zcopy": true, 00:21:54.974 "get_zone_info": false, 00:21:54.974 "zone_management": false, 00:21:54.974 "zone_append": false, 00:21:54.974 "compare": false, 00:21:54.974 "compare_and_write": false, 00:21:54.974 "abort": true, 00:21:54.974 "seek_hole": false, 00:21:54.974 "seek_data": false, 00:21:54.974 "copy": true, 00:21:54.974 "nvme_iov_md": false 00:21:54.974 }, 00:21:54.974 "memory_domains": [ 00:21:54.974 { 00:21:54.974 "dma_device_id": "system", 00:21:54.974 "dma_device_type": 1 00:21:54.974 }, 00:21:54.974 { 00:21:54.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.974 "dma_device_type": 2 00:21:54.974 } 00:21:54.974 ], 00:21:54.974 "driver_specific": {} 00:21:54.974 }' 00:21:54.974 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.974 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:55.233 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:55.233 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:55.234 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:55.234 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:55.234 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:55.234 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:55.234 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:55.234 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:55.234 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:55.492 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:55.492 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:55.492 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:55.492 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:55.751 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:55.751 "name": 
"BaseBdev3", 00:21:55.751 "aliases": [ 00:21:55.751 "1acee0f2-c796-4b7d-bcb6-4ac614041022" 00:21:55.751 ], 00:21:55.751 "product_name": "Malloc disk", 00:21:55.751 "block_size": 512, 00:21:55.751 "num_blocks": 65536, 00:21:55.751 "uuid": "1acee0f2-c796-4b7d-bcb6-4ac614041022", 00:21:55.751 "assigned_rate_limits": { 00:21:55.751 "rw_ios_per_sec": 0, 00:21:55.752 "rw_mbytes_per_sec": 0, 00:21:55.752 "r_mbytes_per_sec": 0, 00:21:55.752 "w_mbytes_per_sec": 0 00:21:55.752 }, 00:21:55.752 "claimed": true, 00:21:55.752 "claim_type": "exclusive_write", 00:21:55.752 "zoned": false, 00:21:55.752 "supported_io_types": { 00:21:55.752 "read": true, 00:21:55.752 "write": true, 00:21:55.752 "unmap": true, 00:21:55.752 "flush": true, 00:21:55.752 "reset": true, 00:21:55.752 "nvme_admin": false, 00:21:55.752 "nvme_io": false, 00:21:55.752 "nvme_io_md": false, 00:21:55.752 "write_zeroes": true, 00:21:55.752 "zcopy": true, 00:21:55.752 "get_zone_info": false, 00:21:55.752 "zone_management": false, 00:21:55.752 "zone_append": false, 00:21:55.752 "compare": false, 00:21:55.752 "compare_and_write": false, 00:21:55.752 "abort": true, 00:21:55.752 "seek_hole": false, 00:21:55.752 "seek_data": false, 00:21:55.752 "copy": true, 00:21:55.752 "nvme_iov_md": false 00:21:55.752 }, 00:21:55.752 "memory_domains": [ 00:21:55.752 { 00:21:55.752 "dma_device_id": "system", 00:21:55.752 "dma_device_type": 1 00:21:55.752 }, 00:21:55.752 { 00:21:55.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.752 "dma_device_type": 2 00:21:55.752 } 00:21:55.752 ], 00:21:55.752 "driver_specific": {} 00:21:55.752 }' 00:21:55.752 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:55.752 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:55.752 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:55.752 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:55.752 06:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:55.752 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:56.010 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:56.010 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:56.010 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:56.010 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:56.010 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:56.010 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:56.010 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:56.269 [2024-08-14 06:53:23.408325] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:56.269 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:56.269 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:21:56.269 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:56.269 06:53:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:21:56.269 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:21:56.269 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:21:56.269 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:56.269 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:56.269 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:56.269 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:56.269 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:56.269 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.269 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.269 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.269 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.269 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.269 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.528 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:56.528 "name": "Existed_Raid", 00:21:56.528 "uuid": "72ccf84e-c11a-49fd-9b62-ff7363b4496e", 00:21:56.528 "strip_size_kb": 64, 00:21:56.528 "state": "online", 00:21:56.528 "raid_level": "raid5f", 00:21:56.528 "superblock": true, 00:21:56.528 "num_base_bdevs": 3, 00:21:56.528 "num_base_bdevs_discovered": 2, 00:21:56.528 "num_base_bdevs_operational": 2, 00:21:56.528 "base_bdevs_list": [ 00:21:56.528 { 00:21:56.528 "name": null, 00:21:56.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.528 "is_configured": false, 00:21:56.528 "data_offset": 2048, 00:21:56.528 "data_size": 63488 00:21:56.528 }, 00:21:56.528 { 00:21:56.528 "name": "BaseBdev2", 00:21:56.528 "uuid": "d09761f7-5cc7-46af-9db2-5955d74f988b", 00:21:56.528 "is_configured": true, 00:21:56.528 "data_offset": 2048, 00:21:56.528 "data_size": 63488 00:21:56.528 }, 00:21:56.528 { 00:21:56.528 "name": "BaseBdev3", 00:21:56.528 "uuid": "1acee0f2-c796-4b7d-bcb6-4ac614041022", 00:21:56.528 "is_configured": true, 00:21:56.528 "data_offset": 2048, 00:21:56.528 "data_size": 63488 00:21:56.528 } 00:21:56.528 ] 00:21:56.528 }' 00:21:56.528 06:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:56.528 06:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:57.094 06:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:57.094 06:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:57.355 06:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:21:57.355 06:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:57.355 06:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:57.355 06:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:57.355 06:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:57.616 [2024-08-14 06:53:24.810279] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:57.617 [2024-08-14 06:53:24.810559] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:57.617 [2024-08-14 06:53:24.822107] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.617 06:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:57.617 06:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:57.617 06:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.617 06:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:57.874 06:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:57.874 06:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:57.874 06:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:58.133 [2024-08-14 06:53:25.325570] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:58.133 [2024-08-14 06:53:25.325656] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:21:58.133 06:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:58.133 06:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:58.133 06:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.133 06:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:58.391 06:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:58.391 06:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:58.391 06:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:21:58.391 06:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:58.391 06:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:58.391 06:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:58.650 BaseBdev2 00:21:58.650 06:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:58.650 06:53:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:58.650 06:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:58.650 06:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:58.650 06:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:58.650 06:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:58.650 06:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:58.909 06:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:59.170 [ 00:21:59.170 { 00:21:59.170 "name": "BaseBdev2", 00:21:59.170 "aliases": [ 00:21:59.170 "9b87787b-cf52-4edc-bc90-9e39ceb95ec3" 00:21:59.170 ], 00:21:59.170 "product_name": "Malloc disk", 00:21:59.170 "block_size": 512, 00:21:59.170 "num_blocks": 65536, 00:21:59.170 "uuid": "9b87787b-cf52-4edc-bc90-9e39ceb95ec3", 00:21:59.170 "assigned_rate_limits": { 00:21:59.170 "rw_ios_per_sec": 0, 00:21:59.170 "rw_mbytes_per_sec": 0, 00:21:59.170 "r_mbytes_per_sec": 0, 00:21:59.170 "w_mbytes_per_sec": 0 00:21:59.170 }, 00:21:59.170 "claimed": false, 00:21:59.170 "zoned": false, 00:21:59.170 "supported_io_types": { 00:21:59.170 "read": true, 00:21:59.170 "write": true, 00:21:59.170 "unmap": true, 00:21:59.170 "flush": true, 00:21:59.170 "reset": true, 00:21:59.170 "nvme_admin": false, 00:21:59.170 "nvme_io": false, 00:21:59.170 "nvme_io_md": false, 00:21:59.170 "write_zeroes": true, 00:21:59.170 "zcopy": true, 00:21:59.170 "get_zone_info": false, 00:21:59.170 "zone_management": false, 00:21:59.170 "zone_append": false, 00:21:59.170 "compare": false, 00:21:59.170 "compare_and_write": false, 00:21:59.170 "abort": true, 00:21:59.170 "seek_hole": false, 00:21:59.170 "seek_data": false, 00:21:59.170 "copy": true, 00:21:59.170 "nvme_iov_md": false 00:21:59.170 }, 00:21:59.170 "memory_domains": [ 00:21:59.170 { 00:21:59.170 "dma_device_id": "system", 00:21:59.170 "dma_device_type": 1 00:21:59.170 }, 00:21:59.170 { 00:21:59.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.170 "dma_device_type": 2 00:21:59.170 } 00:21:59.170 ], 00:21:59.170 "driver_specific": {} 00:21:59.170 } 00:21:59.170 ] 00:21:59.170 06:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:59.170 06:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:59.170 06:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:59.170 06:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:59.430 BaseBdev3 00:21:59.430 06:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:59.430 06:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:21:59.430 06:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:59.430 06:53:26 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@897 -- # local i 00:21:59.430 06:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:59.430 06:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:59.430 06:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:59.690 06:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:59.949 [ 00:21:59.949 { 00:21:59.949 "name": "BaseBdev3", 00:21:59.949 "aliases": [ 00:21:59.949 "ece1bb58-4e70-4899-9aa1-7c45e26947c0" 00:21:59.949 ], 00:21:59.949 "product_name": "Malloc disk", 00:21:59.949 "block_size": 512, 00:21:59.949 "num_blocks": 65536, 00:21:59.949 "uuid": "ece1bb58-4e70-4899-9aa1-7c45e26947c0", 00:21:59.949 "assigned_rate_limits": { 00:21:59.949 "rw_ios_per_sec": 0, 00:21:59.949 "rw_mbytes_per_sec": 0, 00:21:59.949 "r_mbytes_per_sec": 0, 00:21:59.949 "w_mbytes_per_sec": 0 00:21:59.949 }, 00:21:59.949 "claimed": false, 00:21:59.949 "zoned": false, 00:21:59.949 "supported_io_types": { 00:21:59.949 "read": true, 00:21:59.949 "write": true, 00:21:59.949 "unmap": true, 00:21:59.949 "flush": true, 00:21:59.949 "reset": true, 00:21:59.949 "nvme_admin": false, 00:21:59.949 "nvme_io": false, 00:21:59.949 "nvme_io_md": false, 00:21:59.949 "write_zeroes": true, 00:21:59.949 "zcopy": true, 00:21:59.949 "get_zone_info": false, 00:21:59.949 "zone_management": false, 00:21:59.949 "zone_append": false, 00:21:59.949 "compare": false, 00:21:59.949 "compare_and_write": false, 00:21:59.949 "abort": true, 00:21:59.949 "seek_hole": false, 00:21:59.949 "seek_data": false, 00:21:59.949 "copy": true, 00:21:59.949 "nvme_iov_md": false 00:21:59.949 }, 00:21:59.949 "memory_domains": [ 00:21:59.949 { 00:21:59.949 "dma_device_id": "system", 00:21:59.949 "dma_device_type": 1 00:21:59.949 }, 00:21:59.949 { 00:21:59.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.949 "dma_device_type": 2 00:21:59.949 } 00:21:59.949 ], 00:21:59.949 "driver_specific": {} 00:21:59.949 } 00:21:59.949 ] 00:21:59.949 06:53:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:59.949 06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:59.949 06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:59.949 06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:00.209 [2024-08-14 06:53:27.373321] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:00.209 [2024-08-14 06:53:27.373482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:00.209 [2024-08-14 06:53:27.373536] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:00.209 [2024-08-14 06:53:27.375714] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:00.209 06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:00.209 
06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:00.209 06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:00.209 06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:00.209 06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:00.209 06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:00.209 06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.209 06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.209 06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.209 06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.210 06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.210 06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.469 06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:00.469 "name": "Existed_Raid", 00:22:00.469 "uuid": "6713aeee-ffd8-4041-83ac-96681286794d", 00:22:00.469 "strip_size_kb": 64, 00:22:00.469 "state": "configuring", 00:22:00.469 "raid_level": "raid5f", 00:22:00.469 "superblock": true, 00:22:00.469 "num_base_bdevs": 3, 00:22:00.469 "num_base_bdevs_discovered": 2, 00:22:00.469 "num_base_bdevs_operational": 3, 00:22:00.469 "base_bdevs_list": [ 00:22:00.469 { 00:22:00.469 "name": "BaseBdev1", 00:22:00.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.469 "is_configured": false, 00:22:00.469 "data_offset": 0, 00:22:00.469 "data_size": 0 00:22:00.469 }, 00:22:00.469 { 00:22:00.469 "name": "BaseBdev2", 00:22:00.469 "uuid": "9b87787b-cf52-4edc-bc90-9e39ceb95ec3", 00:22:00.469 "is_configured": true, 00:22:00.469 "data_offset": 2048, 00:22:00.469 "data_size": 63488 00:22:00.469 }, 00:22:00.469 { 00:22:00.469 "name": "BaseBdev3", 00:22:00.469 "uuid": "ece1bb58-4e70-4899-9aa1-7c45e26947c0", 00:22:00.469 "is_configured": true, 00:22:00.469 "data_offset": 2048, 00:22:00.469 "data_size": 63488 00:22:00.469 } 00:22:00.469 ] 00:22:00.469 }' 00:22:00.469 06:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:00.469 06:53:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.410 06:53:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:01.410 [2024-08-14 06:53:28.539387] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:01.410 06:53:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:01.410 06:53:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:01.410 06:53:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:01.410 06:53:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:01.410 06:53:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:01.410 06:53:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:01.410 06:53:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:01.410 06:53:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:01.410 06:53:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:01.410 06:53:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:01.410 06:53:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.410 06:53:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.697 06:53:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:01.697 "name": "Existed_Raid", 00:22:01.697 "uuid": "6713aeee-ffd8-4041-83ac-96681286794d", 00:22:01.697 "strip_size_kb": 64, 00:22:01.697 "state": "configuring", 00:22:01.697 "raid_level": "raid5f", 00:22:01.697 "superblock": true, 00:22:01.697 "num_base_bdevs": 3, 00:22:01.697 "num_base_bdevs_discovered": 1, 00:22:01.697 "num_base_bdevs_operational": 3, 00:22:01.697 "base_bdevs_list": [ 00:22:01.697 { 00:22:01.697 "name": "BaseBdev1", 00:22:01.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.697 "is_configured": false, 00:22:01.697 "data_offset": 0, 00:22:01.697 "data_size": 0 00:22:01.697 }, 00:22:01.697 { 00:22:01.697 "name": null, 00:22:01.697 "uuid": "9b87787b-cf52-4edc-bc90-9e39ceb95ec3", 00:22:01.697 "is_configured": false, 00:22:01.697 "data_offset": 2048, 00:22:01.697 "data_size": 63488 00:22:01.697 }, 00:22:01.697 { 00:22:01.697 "name": "BaseBdev3", 00:22:01.697 "uuid": "ece1bb58-4e70-4899-9aa1-7c45e26947c0", 00:22:01.697 "is_configured": true, 00:22:01.697 "data_offset": 2048, 00:22:01.697 "data_size": 63488 00:22:01.697 } 00:22:01.697 ] 00:22:01.697 }' 00:22:01.697 06:53:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:01.697 06:53:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.284 06:53:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.284 06:53:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:02.541 06:53:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:02.541 06:53:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:02.799 [2024-08-14 06:53:29.896748] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:02.799 BaseBdev1 00:22:02.799 06:53:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:02.799 06:53:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local 
bdev_name=BaseBdev1 00:22:02.799 06:53:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:02.799 06:53:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:02.799 06:53:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:02.799 06:53:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:02.799 06:53:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:03.057 06:53:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:03.315 [ 00:22:03.315 { 00:22:03.315 "name": "BaseBdev1", 00:22:03.315 "aliases": [ 00:22:03.315 "ad062fb7-ed46-44b1-9cb9-eb2cb2db7f79" 00:22:03.315 ], 00:22:03.315 "product_name": "Malloc disk", 00:22:03.315 "block_size": 512, 00:22:03.315 "num_blocks": 65536, 00:22:03.315 "uuid": "ad062fb7-ed46-44b1-9cb9-eb2cb2db7f79", 00:22:03.315 "assigned_rate_limits": { 00:22:03.315 "rw_ios_per_sec": 0, 00:22:03.315 "rw_mbytes_per_sec": 0, 00:22:03.315 "r_mbytes_per_sec": 0, 00:22:03.315 "w_mbytes_per_sec": 0 00:22:03.315 }, 00:22:03.315 "claimed": true, 00:22:03.315 "claim_type": "exclusive_write", 00:22:03.315 "zoned": false, 00:22:03.315 "supported_io_types": { 00:22:03.315 "read": true, 00:22:03.315 "write": true, 00:22:03.315 "unmap": true, 00:22:03.315 "flush": true, 00:22:03.315 "reset": true, 00:22:03.315 "nvme_admin": false, 00:22:03.315 "nvme_io": false, 00:22:03.315 "nvme_io_md": false, 00:22:03.315 "write_zeroes": true, 00:22:03.315 "zcopy": true, 00:22:03.315 "get_zone_info": false, 00:22:03.315 "zone_management": false, 00:22:03.315 "zone_append": false, 00:22:03.315 "compare": false, 00:22:03.315 "compare_and_write": false, 00:22:03.315 "abort": true, 00:22:03.315 "seek_hole": false, 00:22:03.315 "seek_data": false, 00:22:03.315 "copy": true, 00:22:03.315 "nvme_iov_md": false 00:22:03.315 }, 00:22:03.315 "memory_domains": [ 00:22:03.315 { 00:22:03.315 "dma_device_id": "system", 00:22:03.315 "dma_device_type": 1 00:22:03.315 }, 00:22:03.315 { 00:22:03.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.315 "dma_device_type": 2 00:22:03.315 } 00:22:03.315 ], 00:22:03.315 "driver_specific": {} 00:22:03.315 } 00:22:03.315 ] 00:22:03.315 06:53:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:03.315 06:53:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:03.315 06:53:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:03.315 06:53:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:03.315 06:53:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:03.315 06:53:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:03.315 06:53:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:03.315 06:53:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:03.315 06:53:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:03.315 06:53:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:03.316 06:53:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:03.316 06:53:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.316 06:53:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.574 06:53:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.574 "name": "Existed_Raid", 00:22:03.574 "uuid": "6713aeee-ffd8-4041-83ac-96681286794d", 00:22:03.574 "strip_size_kb": 64, 00:22:03.574 "state": "configuring", 00:22:03.574 "raid_level": "raid5f", 00:22:03.574 "superblock": true, 00:22:03.574 "num_base_bdevs": 3, 00:22:03.574 "num_base_bdevs_discovered": 2, 00:22:03.574 "num_base_bdevs_operational": 3, 00:22:03.574 "base_bdevs_list": [ 00:22:03.574 { 00:22:03.574 "name": "BaseBdev1", 00:22:03.574 "uuid": "ad062fb7-ed46-44b1-9cb9-eb2cb2db7f79", 00:22:03.574 "is_configured": true, 00:22:03.574 "data_offset": 2048, 00:22:03.574 "data_size": 63488 00:22:03.574 }, 00:22:03.574 { 00:22:03.574 "name": null, 00:22:03.574 "uuid": "9b87787b-cf52-4edc-bc90-9e39ceb95ec3", 00:22:03.574 "is_configured": false, 00:22:03.574 "data_offset": 2048, 00:22:03.574 "data_size": 63488 00:22:03.574 }, 00:22:03.574 { 00:22:03.574 "name": "BaseBdev3", 00:22:03.574 "uuid": "ece1bb58-4e70-4899-9aa1-7c45e26947c0", 00:22:03.574 "is_configured": true, 00:22:03.574 "data_offset": 2048, 00:22:03.574 "data_size": 63488 00:22:03.574 } 00:22:03.574 ] 00:22:03.574 }' 00:22:03.574 06:53:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.574 06:53:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.140 06:53:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.140 06:53:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:04.397 06:53:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:04.397 06:53:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:04.656 [2024-08-14 06:53:31.797810] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:04.656 06:53:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:04.656 06:53:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:04.656 06:53:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:04.656 06:53:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:04.656 06:53:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:04.656 06:53:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:22:04.656 06:53:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:04.656 06:53:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:04.656 06:53:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:04.656 06:53:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:04.656 06:53:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.656 06:53:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.914 06:53:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:04.914 "name": "Existed_Raid", 00:22:04.914 "uuid": "6713aeee-ffd8-4041-83ac-96681286794d", 00:22:04.914 "strip_size_kb": 64, 00:22:04.914 "state": "configuring", 00:22:04.914 "raid_level": "raid5f", 00:22:04.914 "superblock": true, 00:22:04.914 "num_base_bdevs": 3, 00:22:04.914 "num_base_bdevs_discovered": 1, 00:22:04.914 "num_base_bdevs_operational": 3, 00:22:04.914 "base_bdevs_list": [ 00:22:04.914 { 00:22:04.914 "name": "BaseBdev1", 00:22:04.914 "uuid": "ad062fb7-ed46-44b1-9cb9-eb2cb2db7f79", 00:22:04.914 "is_configured": true, 00:22:04.914 "data_offset": 2048, 00:22:04.914 "data_size": 63488 00:22:04.914 }, 00:22:04.914 { 00:22:04.914 "name": null, 00:22:04.914 "uuid": "9b87787b-cf52-4edc-bc90-9e39ceb95ec3", 00:22:04.914 "is_configured": false, 00:22:04.914 "data_offset": 2048, 00:22:04.914 "data_size": 63488 00:22:04.914 }, 00:22:04.914 { 00:22:04.914 "name": null, 00:22:04.914 "uuid": "ece1bb58-4e70-4899-9aa1-7c45e26947c0", 00:22:04.914 "is_configured": false, 00:22:04.914 "data_offset": 2048, 00:22:04.914 "data_size": 63488 00:22:04.914 } 00:22:04.914 ] 00:22:04.914 }' 00:22:04.914 06:53:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:04.914 06:53:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.479 06:53:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.479 06:53:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:05.737 06:53:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:05.737 06:53:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:05.996 [2024-08-14 06:53:33.215469] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:05.996 06:53:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:05.997 06:53:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:05.997 06:53:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:05.997 06:53:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:05.997 06:53:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:05.997 06:53:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:05.997 06:53:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:05.997 06:53:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:05.997 06:53:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:05.997 06:53:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:05.997 06:53:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.997 06:53:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.625 06:53:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:06.625 "name": "Existed_Raid", 00:22:06.625 "uuid": "6713aeee-ffd8-4041-83ac-96681286794d", 00:22:06.625 "strip_size_kb": 64, 00:22:06.625 "state": "configuring", 00:22:06.625 "raid_level": "raid5f", 00:22:06.625 "superblock": true, 00:22:06.625 "num_base_bdevs": 3, 00:22:06.625 "num_base_bdevs_discovered": 2, 00:22:06.625 "num_base_bdevs_operational": 3, 00:22:06.625 "base_bdevs_list": [ 00:22:06.625 { 00:22:06.625 "name": "BaseBdev1", 00:22:06.625 "uuid": "ad062fb7-ed46-44b1-9cb9-eb2cb2db7f79", 00:22:06.625 "is_configured": true, 00:22:06.625 "data_offset": 2048, 00:22:06.625 "data_size": 63488 00:22:06.625 }, 00:22:06.625 { 00:22:06.625 "name": null, 00:22:06.625 "uuid": "9b87787b-cf52-4edc-bc90-9e39ceb95ec3", 00:22:06.625 "is_configured": false, 00:22:06.625 "data_offset": 2048, 00:22:06.625 "data_size": 63488 00:22:06.625 }, 00:22:06.625 { 00:22:06.625 "name": "BaseBdev3", 00:22:06.625 "uuid": "ece1bb58-4e70-4899-9aa1-7c45e26947c0", 00:22:06.625 "is_configured": true, 00:22:06.625 "data_offset": 2048, 00:22:06.625 "data_size": 63488 00:22:06.625 } 00:22:06.625 ] 00:22:06.625 }' 00:22:06.625 06:53:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:06.625 06:53:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.190 06:53:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:07.190 06:53:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.448 06:53:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:07.448 06:53:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:07.706 [2024-08-14 06:53:34.714113] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:07.706 06:53:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:07.706 06:53:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:07.706 06:53:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:22:07.706 06:53:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:07.706 06:53:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:07.706 06:53:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:07.706 06:53:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:07.706 06:53:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:07.706 06:53:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:07.706 06:53:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:07.706 06:53:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.706 06:53:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.964 06:53:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:07.964 "name": "Existed_Raid", 00:22:07.964 "uuid": "6713aeee-ffd8-4041-83ac-96681286794d", 00:22:07.964 "strip_size_kb": 64, 00:22:07.964 "state": "configuring", 00:22:07.964 "raid_level": "raid5f", 00:22:07.964 "superblock": true, 00:22:07.964 "num_base_bdevs": 3, 00:22:07.964 "num_base_bdevs_discovered": 1, 00:22:07.964 "num_base_bdevs_operational": 3, 00:22:07.964 "base_bdevs_list": [ 00:22:07.964 { 00:22:07.964 "name": null, 00:22:07.964 "uuid": "ad062fb7-ed46-44b1-9cb9-eb2cb2db7f79", 00:22:07.964 "is_configured": false, 00:22:07.964 "data_offset": 2048, 00:22:07.964 "data_size": 63488 00:22:07.964 }, 00:22:07.964 { 00:22:07.964 "name": null, 00:22:07.964 "uuid": "9b87787b-cf52-4edc-bc90-9e39ceb95ec3", 00:22:07.964 "is_configured": false, 00:22:07.964 "data_offset": 2048, 00:22:07.964 "data_size": 63488 00:22:07.964 }, 00:22:07.964 { 00:22:07.964 "name": "BaseBdev3", 00:22:07.964 "uuid": "ece1bb58-4e70-4899-9aa1-7c45e26947c0", 00:22:07.964 "is_configured": true, 00:22:07.964 "data_offset": 2048, 00:22:07.964 "data_size": 63488 00:22:07.964 } 00:22:07.964 ] 00:22:07.964 }' 00:22:07.964 06:53:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:07.964 06:53:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.530 06:53:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.530 06:53:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:08.788 06:53:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:08.788 06:53:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:09.047 [2024-08-14 06:53:36.099124] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:09.047 06:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:09.047 06:53:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:09.047 06:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:09.047 06:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:09.047 06:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:09.047 06:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:09.047 06:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:09.047 06:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:09.047 06:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:09.047 06:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:09.047 06:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.047 06:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.305 06:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:09.305 "name": "Existed_Raid", 00:22:09.305 "uuid": "6713aeee-ffd8-4041-83ac-96681286794d", 00:22:09.305 "strip_size_kb": 64, 00:22:09.305 "state": "configuring", 00:22:09.305 "raid_level": "raid5f", 00:22:09.305 "superblock": true, 00:22:09.305 "num_base_bdevs": 3, 00:22:09.305 "num_base_bdevs_discovered": 2, 00:22:09.305 "num_base_bdevs_operational": 3, 00:22:09.305 "base_bdevs_list": [ 00:22:09.305 { 00:22:09.305 "name": null, 00:22:09.305 "uuid": "ad062fb7-ed46-44b1-9cb9-eb2cb2db7f79", 00:22:09.305 "is_configured": false, 00:22:09.305 "data_offset": 2048, 00:22:09.305 "data_size": 63488 00:22:09.305 }, 00:22:09.305 { 00:22:09.305 "name": "BaseBdev2", 00:22:09.305 "uuid": "9b87787b-cf52-4edc-bc90-9e39ceb95ec3", 00:22:09.305 "is_configured": true, 00:22:09.305 "data_offset": 2048, 00:22:09.305 "data_size": 63488 00:22:09.305 }, 00:22:09.305 { 00:22:09.305 "name": "BaseBdev3", 00:22:09.305 "uuid": "ece1bb58-4e70-4899-9aa1-7c45e26947c0", 00:22:09.305 "is_configured": true, 00:22:09.305 "data_offset": 2048, 00:22:09.305 "data_size": 63488 00:22:09.305 } 00:22:09.305 ] 00:22:09.305 }' 00:22:09.305 06:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:09.305 06:53:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.873 06:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.873 06:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:10.132 06:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:10.132 06:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.132 06:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 
00:22:10.391 06:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ad062fb7-ed46-44b1-9cb9-eb2cb2db7f79 00:22:10.651 [2024-08-14 06:53:37.695952] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:10.651 [2024-08-14 06:53:37.696157] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:22:10.651 [2024-08-14 06:53:37.696171] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:10.651 [2024-08-14 06:53:37.696445] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:22:10.651 NewBaseBdev 00:22:10.651 [2024-08-14 06:53:37.696871] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:22:10.651 [2024-08-14 06:53:37.696899] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:22:10.651 [2024-08-14 06:53:37.697010] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.651 06:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:10.651 06:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:22:10.651 06:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:10.651 06:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:10.651 06:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:10.651 06:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:10.651 06:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:10.912 06:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:11.172 [ 00:22:11.172 { 00:22:11.172 "name": "NewBaseBdev", 00:22:11.172 "aliases": [ 00:22:11.172 "ad062fb7-ed46-44b1-9cb9-eb2cb2db7f79" 00:22:11.172 ], 00:22:11.172 "product_name": "Malloc disk", 00:22:11.172 "block_size": 512, 00:22:11.172 "num_blocks": 65536, 00:22:11.172 "uuid": "ad062fb7-ed46-44b1-9cb9-eb2cb2db7f79", 00:22:11.172 "assigned_rate_limits": { 00:22:11.172 "rw_ios_per_sec": 0, 00:22:11.172 "rw_mbytes_per_sec": 0, 00:22:11.172 "r_mbytes_per_sec": 0, 00:22:11.172 "w_mbytes_per_sec": 0 00:22:11.172 }, 00:22:11.172 "claimed": true, 00:22:11.172 "claim_type": "exclusive_write", 00:22:11.172 "zoned": false, 00:22:11.172 "supported_io_types": { 00:22:11.172 "read": true, 00:22:11.172 "write": true, 00:22:11.172 "unmap": true, 00:22:11.172 "flush": true, 00:22:11.172 "reset": true, 00:22:11.172 "nvme_admin": false, 00:22:11.172 "nvme_io": false, 00:22:11.172 "nvme_io_md": false, 00:22:11.172 "write_zeroes": true, 00:22:11.172 "zcopy": true, 00:22:11.172 "get_zone_info": false, 00:22:11.172 "zone_management": false, 00:22:11.172 "zone_append": false, 00:22:11.172 "compare": false, 00:22:11.172 "compare_and_write": false, 00:22:11.172 "abort": true, 00:22:11.172 "seek_hole": false, 00:22:11.172 "seek_data": false, 00:22:11.172 "copy": true, 
00:22:11.172 "nvme_iov_md": false 00:22:11.172 }, 00:22:11.172 "memory_domains": [ 00:22:11.172 { 00:22:11.172 "dma_device_id": "system", 00:22:11.172 "dma_device_type": 1 00:22:11.172 }, 00:22:11.172 { 00:22:11.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.172 "dma_device_type": 2 00:22:11.172 } 00:22:11.172 ], 00:22:11.172 "driver_specific": {} 00:22:11.172 } 00:22:11.172 ] 00:22:11.172 06:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:11.172 06:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:11.172 06:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:11.172 06:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:11.172 06:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:11.172 06:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:11.172 06:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:11.172 06:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:11.172 06:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:11.172 06:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:11.172 06:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:11.172 06:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.172 06:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.431 06:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:11.432 "name": "Existed_Raid", 00:22:11.432 "uuid": "6713aeee-ffd8-4041-83ac-96681286794d", 00:22:11.432 "strip_size_kb": 64, 00:22:11.432 "state": "online", 00:22:11.432 "raid_level": "raid5f", 00:22:11.432 "superblock": true, 00:22:11.432 "num_base_bdevs": 3, 00:22:11.432 "num_base_bdevs_discovered": 3, 00:22:11.432 "num_base_bdevs_operational": 3, 00:22:11.432 "base_bdevs_list": [ 00:22:11.432 { 00:22:11.432 "name": "NewBaseBdev", 00:22:11.432 "uuid": "ad062fb7-ed46-44b1-9cb9-eb2cb2db7f79", 00:22:11.432 "is_configured": true, 00:22:11.432 "data_offset": 2048, 00:22:11.432 "data_size": 63488 00:22:11.432 }, 00:22:11.432 { 00:22:11.432 "name": "BaseBdev2", 00:22:11.432 "uuid": "9b87787b-cf52-4edc-bc90-9e39ceb95ec3", 00:22:11.432 "is_configured": true, 00:22:11.432 "data_offset": 2048, 00:22:11.432 "data_size": 63488 00:22:11.432 }, 00:22:11.432 { 00:22:11.432 "name": "BaseBdev3", 00:22:11.432 "uuid": "ece1bb58-4e70-4899-9aa1-7c45e26947c0", 00:22:11.432 "is_configured": true, 00:22:11.432 "data_offset": 2048, 00:22:11.432 "data_size": 63488 00:22:11.432 } 00:22:11.432 ] 00:22:11.432 }' 00:22:11.432 06:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:11.432 06:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.001 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # 
verify_raid_bdev_properties Existed_Raid 00:22:12.001 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:12.001 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:12.001 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:12.001 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:12.001 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:12.001 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:12.001 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:12.261 [2024-08-14 06:53:39.277696] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:12.261 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:12.261 "name": "Existed_Raid", 00:22:12.261 "aliases": [ 00:22:12.261 "6713aeee-ffd8-4041-83ac-96681286794d" 00:22:12.261 ], 00:22:12.261 "product_name": "Raid Volume", 00:22:12.261 "block_size": 512, 00:22:12.261 "num_blocks": 126976, 00:22:12.261 "uuid": "6713aeee-ffd8-4041-83ac-96681286794d", 00:22:12.261 "assigned_rate_limits": { 00:22:12.261 "rw_ios_per_sec": 0, 00:22:12.261 "rw_mbytes_per_sec": 0, 00:22:12.261 "r_mbytes_per_sec": 0, 00:22:12.261 "w_mbytes_per_sec": 0 00:22:12.261 }, 00:22:12.261 "claimed": false, 00:22:12.261 "zoned": false, 00:22:12.261 "supported_io_types": { 00:22:12.261 "read": true, 00:22:12.261 "write": true, 00:22:12.261 "unmap": false, 00:22:12.261 "flush": false, 00:22:12.261 "reset": true, 00:22:12.261 "nvme_admin": false, 00:22:12.261 "nvme_io": false, 00:22:12.261 "nvme_io_md": false, 00:22:12.261 "write_zeroes": true, 00:22:12.261 "zcopy": false, 00:22:12.261 "get_zone_info": false, 00:22:12.261 "zone_management": false, 00:22:12.261 "zone_append": false, 00:22:12.261 "compare": false, 00:22:12.261 "compare_and_write": false, 00:22:12.261 "abort": false, 00:22:12.261 "seek_hole": false, 00:22:12.261 "seek_data": false, 00:22:12.261 "copy": false, 00:22:12.261 "nvme_iov_md": false 00:22:12.261 }, 00:22:12.261 "driver_specific": { 00:22:12.261 "raid": { 00:22:12.261 "uuid": "6713aeee-ffd8-4041-83ac-96681286794d", 00:22:12.261 "strip_size_kb": 64, 00:22:12.261 "state": "online", 00:22:12.261 "raid_level": "raid5f", 00:22:12.261 "superblock": true, 00:22:12.261 "num_base_bdevs": 3, 00:22:12.261 "num_base_bdevs_discovered": 3, 00:22:12.261 "num_base_bdevs_operational": 3, 00:22:12.261 "base_bdevs_list": [ 00:22:12.261 { 00:22:12.261 "name": "NewBaseBdev", 00:22:12.261 "uuid": "ad062fb7-ed46-44b1-9cb9-eb2cb2db7f79", 00:22:12.261 "is_configured": true, 00:22:12.261 "data_offset": 2048, 00:22:12.261 "data_size": 63488 00:22:12.261 }, 00:22:12.261 { 00:22:12.261 "name": "BaseBdev2", 00:22:12.261 "uuid": "9b87787b-cf52-4edc-bc90-9e39ceb95ec3", 00:22:12.261 "is_configured": true, 00:22:12.261 "data_offset": 2048, 00:22:12.261 "data_size": 63488 00:22:12.261 }, 00:22:12.261 { 00:22:12.261 "name": "BaseBdev3", 00:22:12.261 "uuid": "ece1bb58-4e70-4899-9aa1-7c45e26947c0", 00:22:12.261 "is_configured": true, 00:22:12.261 "data_offset": 2048, 00:22:12.261 "data_size": 63488 00:22:12.261 } 00:22:12.261 ] 00:22:12.261 } 00:22:12.261 } 00:22:12.261 }' 
00:22:12.261 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:12.261 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:12.261 BaseBdev2 00:22:12.261 BaseBdev3' 00:22:12.261 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:12.261 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:12.261 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:12.522 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:12.522 "name": "NewBaseBdev", 00:22:12.522 "aliases": [ 00:22:12.522 "ad062fb7-ed46-44b1-9cb9-eb2cb2db7f79" 00:22:12.522 ], 00:22:12.522 "product_name": "Malloc disk", 00:22:12.522 "block_size": 512, 00:22:12.522 "num_blocks": 65536, 00:22:12.522 "uuid": "ad062fb7-ed46-44b1-9cb9-eb2cb2db7f79", 00:22:12.522 "assigned_rate_limits": { 00:22:12.522 "rw_ios_per_sec": 0, 00:22:12.522 "rw_mbytes_per_sec": 0, 00:22:12.522 "r_mbytes_per_sec": 0, 00:22:12.522 "w_mbytes_per_sec": 0 00:22:12.522 }, 00:22:12.522 "claimed": true, 00:22:12.522 "claim_type": "exclusive_write", 00:22:12.522 "zoned": false, 00:22:12.522 "supported_io_types": { 00:22:12.522 "read": true, 00:22:12.522 "write": true, 00:22:12.522 "unmap": true, 00:22:12.522 "flush": true, 00:22:12.522 "reset": true, 00:22:12.522 "nvme_admin": false, 00:22:12.522 "nvme_io": false, 00:22:12.522 "nvme_io_md": false, 00:22:12.522 "write_zeroes": true, 00:22:12.522 "zcopy": true, 00:22:12.522 "get_zone_info": false, 00:22:12.522 "zone_management": false, 00:22:12.522 "zone_append": false, 00:22:12.522 "compare": false, 00:22:12.522 "compare_and_write": false, 00:22:12.522 "abort": true, 00:22:12.522 "seek_hole": false, 00:22:12.522 "seek_data": false, 00:22:12.522 "copy": true, 00:22:12.522 "nvme_iov_md": false 00:22:12.522 }, 00:22:12.522 "memory_domains": [ 00:22:12.522 { 00:22:12.522 "dma_device_id": "system", 00:22:12.522 "dma_device_type": 1 00:22:12.522 }, 00:22:12.522 { 00:22:12.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:12.522 "dma_device_type": 2 00:22:12.522 } 00:22:12.522 ], 00:22:12.522 "driver_specific": {} 00:22:12.522 }' 00:22:12.522 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:12.522 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:12.522 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:12.522 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:12.522 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:12.782 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:12.782 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:12.782 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:12.782 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:12.782 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # 
jq .dif_type 00:22:12.782 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:12.782 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:12.782 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:12.782 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:12.782 06:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:13.042 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:13.042 "name": "BaseBdev2", 00:22:13.042 "aliases": [ 00:22:13.042 "9b87787b-cf52-4edc-bc90-9e39ceb95ec3" 00:22:13.042 ], 00:22:13.042 "product_name": "Malloc disk", 00:22:13.042 "block_size": 512, 00:22:13.042 "num_blocks": 65536, 00:22:13.042 "uuid": "9b87787b-cf52-4edc-bc90-9e39ceb95ec3", 00:22:13.042 "assigned_rate_limits": { 00:22:13.042 "rw_ios_per_sec": 0, 00:22:13.042 "rw_mbytes_per_sec": 0, 00:22:13.042 "r_mbytes_per_sec": 0, 00:22:13.042 "w_mbytes_per_sec": 0 00:22:13.042 }, 00:22:13.042 "claimed": true, 00:22:13.042 "claim_type": "exclusive_write", 00:22:13.042 "zoned": false, 00:22:13.042 "supported_io_types": { 00:22:13.042 "read": true, 00:22:13.042 "write": true, 00:22:13.042 "unmap": true, 00:22:13.042 "flush": true, 00:22:13.042 "reset": true, 00:22:13.042 "nvme_admin": false, 00:22:13.042 "nvme_io": false, 00:22:13.042 "nvme_io_md": false, 00:22:13.042 "write_zeroes": true, 00:22:13.042 "zcopy": true, 00:22:13.042 "get_zone_info": false, 00:22:13.042 "zone_management": false, 00:22:13.042 "zone_append": false, 00:22:13.042 "compare": false, 00:22:13.042 "compare_and_write": false, 00:22:13.042 "abort": true, 00:22:13.042 "seek_hole": false, 00:22:13.042 "seek_data": false, 00:22:13.042 "copy": true, 00:22:13.042 "nvme_iov_md": false 00:22:13.042 }, 00:22:13.042 "memory_domains": [ 00:22:13.042 { 00:22:13.042 "dma_device_id": "system", 00:22:13.042 "dma_device_type": 1 00:22:13.042 }, 00:22:13.042 { 00:22:13.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.043 "dma_device_type": 2 00:22:13.043 } 00:22:13.043 ], 00:22:13.043 "driver_specific": {} 00:22:13.043 }' 00:22:13.043 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:13.043 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:13.301 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:13.301 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:13.301 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:13.301 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:13.301 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:13.301 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:13.301 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:13.301 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:13.301 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:22:13.559 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:13.559 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:13.559 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:13.559 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:13.818 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:13.818 "name": "BaseBdev3", 00:22:13.818 "aliases": [ 00:22:13.818 "ece1bb58-4e70-4899-9aa1-7c45e26947c0" 00:22:13.818 ], 00:22:13.818 "product_name": "Malloc disk", 00:22:13.818 "block_size": 512, 00:22:13.818 "num_blocks": 65536, 00:22:13.818 "uuid": "ece1bb58-4e70-4899-9aa1-7c45e26947c0", 00:22:13.818 "assigned_rate_limits": { 00:22:13.818 "rw_ios_per_sec": 0, 00:22:13.818 "rw_mbytes_per_sec": 0, 00:22:13.818 "r_mbytes_per_sec": 0, 00:22:13.818 "w_mbytes_per_sec": 0 00:22:13.818 }, 00:22:13.818 "claimed": true, 00:22:13.818 "claim_type": "exclusive_write", 00:22:13.818 "zoned": false, 00:22:13.818 "supported_io_types": { 00:22:13.818 "read": true, 00:22:13.818 "write": true, 00:22:13.818 "unmap": true, 00:22:13.818 "flush": true, 00:22:13.818 "reset": true, 00:22:13.818 "nvme_admin": false, 00:22:13.818 "nvme_io": false, 00:22:13.818 "nvme_io_md": false, 00:22:13.818 "write_zeroes": true, 00:22:13.818 "zcopy": true, 00:22:13.818 "get_zone_info": false, 00:22:13.818 "zone_management": false, 00:22:13.818 "zone_append": false, 00:22:13.818 "compare": false, 00:22:13.818 "compare_and_write": false, 00:22:13.818 "abort": true, 00:22:13.818 "seek_hole": false, 00:22:13.818 "seek_data": false, 00:22:13.818 "copy": true, 00:22:13.818 "nvme_iov_md": false 00:22:13.818 }, 00:22:13.818 "memory_domains": [ 00:22:13.818 { 00:22:13.818 "dma_device_id": "system", 00:22:13.818 "dma_device_type": 1 00:22:13.818 }, 00:22:13.818 { 00:22:13.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.818 "dma_device_type": 2 00:22:13.818 } 00:22:13.818 ], 00:22:13.818 "driver_specific": {} 00:22:13.818 }' 00:22:13.818 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:13.818 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:13.818 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:13.818 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:13.818 06:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:13.818 06:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:13.818 06:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:13.818 06:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:14.077 06:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:14.077 06:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:14.077 06:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:14.077 06:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:22:14.077 06:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:14.338 [2024-08-14 06:53:41.426044] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:14.338 [2024-08-14 06:53:41.426091] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:14.338 [2024-08-14 06:53:41.426213] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:14.338 [2024-08-14 06:53:41.426524] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:14.338 [2024-08-14 06:53:41.426552] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:22:14.338 06:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 98803 00:22:14.338 06:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 98803 ']' 00:22:14.338 06:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 98803 00:22:14.338 06:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:22:14.338 06:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:14.338 06:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98803 00:22:14.338 06:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:14.338 06:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:14.338 06:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98803' 00:22:14.338 killing process with pid 98803 00:22:14.338 06:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 98803 00:22:14.338 [2024-08-14 06:53:41.492383] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:14.338 06:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 98803 00:22:14.338 [2024-08-14 06:53:41.524975] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:14.619 06:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:22:14.619 00:22:14.619 real 0m30.240s 00:22:14.619 user 0m56.366s 00:22:14.619 sys 0m4.280s 00:22:14.619 06:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:14.619 06:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.619 ************************************ 00:22:14.619 END TEST raid5f_state_function_test_sb 00:22:14.619 ************************************ 00:22:14.619 06:53:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:22:14.619 06:53:41 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:14.619 06:53:41 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:14.619 06:53:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:14.619 ************************************ 00:22:14.619 START TEST raid5f_superblock_test 00:22:14.619 ************************************ 00:22:14.619 06:53:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid5f 3 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid5f 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid5f '!=' raid1 ']' 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=99741 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 99741 /var/tmp/spdk-raid.sock 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 99741 ']' 00:22:14.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:14.619 06:53:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.878 [2024-08-14 06:53:41.928761] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:22:14.878 [2024-08-14 06:53:41.928908] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99741 ] 00:22:14.878 [2024-08-14 06:53:42.077717] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.137 [2024-08-14 06:53:42.132155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.137 [2024-08-14 06:53:42.176674] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:15.137 [2024-08-14 06:53:42.176715] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:15.706 06:53:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:15.706 06:53:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:22:15.706 06:53:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:22:15.706 06:53:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:15.706 06:53:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:22:15.706 06:53:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:22:15.706 06:53:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:15.706 06:53:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:15.706 06:53:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:15.706 06:53:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:15.706 06:53:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:15.965 malloc1 00:22:15.965 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:16.224 [2024-08-14 06:53:43.358687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:16.224 [2024-08-14 06:53:43.358876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.224 [2024-08-14 06:53:43.358928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:22:16.224 [2024-08-14 06:53:43.358967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.224 [2024-08-14 06:53:43.361443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.224 [2024-08-14 06:53:43.361540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:16.224 pt1 00:22:16.224 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:16.224 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:16.224 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:22:16.224 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:22:16.224 06:53:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:16.224 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:16.224 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:16.224 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:16.224 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:16.484 malloc2 00:22:16.484 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:16.744 [2024-08-14 06:53:43.843222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:16.744 [2024-08-14 06:53:43.843393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.744 [2024-08-14 06:53:43.843439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:16.744 [2024-08-14 06:53:43.843491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.744 [2024-08-14 06:53:43.845938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.744 [2024-08-14 06:53:43.846034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:16.744 pt2 00:22:16.744 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:16.744 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:16.744 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:22:16.744 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:22:16.744 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:16.744 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:16.744 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:16.744 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:16.744 06:53:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:17.003 malloc3 00:22:17.003 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:17.262 [2024-08-14 06:53:44.342340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:17.262 [2024-08-14 06:53:44.342522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.262 [2024-08-14 06:53:44.342573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:17.262 [2024-08-14 06:53:44.342609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.262 [2024-08-14 06:53:44.345067] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.262 [2024-08-14 06:53:44.345158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:17.262 pt3 00:22:17.262 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:17.262 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:17.262 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:22:17.522 [2024-08-14 06:53:44.582082] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:17.522 [2024-08-14 06:53:44.584222] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:17.522 [2024-08-14 06:53:44.584344] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:17.522 [2024-08-14 06:53:44.584569] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:22:17.522 [2024-08-14 06:53:44.584623] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:17.522 [2024-08-14 06:53:44.584981] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:22:17.522 [2024-08-14 06:53:44.585527] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:22:17.522 [2024-08-14 06:53:44.585582] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:22:17.522 [2024-08-14 06:53:44.585808] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.522 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:17.522 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:17.522 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:17.522 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:17.522 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:17.522 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:17.522 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:17.522 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:17.522 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:17.522 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:17.522 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.522 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.781 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:17.781 "name": "raid_bdev1", 00:22:17.781 "uuid": "003307a9-d75e-4477-ae5d-798e9046cde1", 00:22:17.781 "strip_size_kb": 64, 00:22:17.781 "state": "online", 00:22:17.781 "raid_level": "raid5f", 00:22:17.781 "superblock": true, 00:22:17.781 
"num_base_bdevs": 3, 00:22:17.781 "num_base_bdevs_discovered": 3, 00:22:17.781 "num_base_bdevs_operational": 3, 00:22:17.781 "base_bdevs_list": [ 00:22:17.781 { 00:22:17.781 "name": "pt1", 00:22:17.781 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:17.781 "is_configured": true, 00:22:17.781 "data_offset": 2048, 00:22:17.781 "data_size": 63488 00:22:17.781 }, 00:22:17.781 { 00:22:17.781 "name": "pt2", 00:22:17.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.781 "is_configured": true, 00:22:17.781 "data_offset": 2048, 00:22:17.781 "data_size": 63488 00:22:17.781 }, 00:22:17.781 { 00:22:17.781 "name": "pt3", 00:22:17.781 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:17.781 "is_configured": true, 00:22:17.781 "data_offset": 2048, 00:22:17.781 "data_size": 63488 00:22:17.781 } 00:22:17.781 ] 00:22:17.781 }' 00:22:17.781 06:53:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:17.781 06:53:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.351 06:53:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:22:18.351 06:53:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:18.351 06:53:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:18.351 06:53:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:18.351 06:53:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:18.351 06:53:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:18.351 06:53:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:18.351 06:53:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:18.611 [2024-08-14 06:53:45.696488] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:18.611 06:53:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:18.611 "name": "raid_bdev1", 00:22:18.611 "aliases": [ 00:22:18.611 "003307a9-d75e-4477-ae5d-798e9046cde1" 00:22:18.611 ], 00:22:18.611 "product_name": "Raid Volume", 00:22:18.611 "block_size": 512, 00:22:18.611 "num_blocks": 126976, 00:22:18.611 "uuid": "003307a9-d75e-4477-ae5d-798e9046cde1", 00:22:18.611 "assigned_rate_limits": { 00:22:18.611 "rw_ios_per_sec": 0, 00:22:18.611 "rw_mbytes_per_sec": 0, 00:22:18.611 "r_mbytes_per_sec": 0, 00:22:18.611 "w_mbytes_per_sec": 0 00:22:18.611 }, 00:22:18.611 "claimed": false, 00:22:18.611 "zoned": false, 00:22:18.611 "supported_io_types": { 00:22:18.611 "read": true, 00:22:18.611 "write": true, 00:22:18.611 "unmap": false, 00:22:18.611 "flush": false, 00:22:18.611 "reset": true, 00:22:18.611 "nvme_admin": false, 00:22:18.611 "nvme_io": false, 00:22:18.611 "nvme_io_md": false, 00:22:18.611 "write_zeroes": true, 00:22:18.611 "zcopy": false, 00:22:18.611 "get_zone_info": false, 00:22:18.611 "zone_management": false, 00:22:18.611 "zone_append": false, 00:22:18.611 "compare": false, 00:22:18.611 "compare_and_write": false, 00:22:18.611 "abort": false, 00:22:18.611 "seek_hole": false, 00:22:18.611 "seek_data": false, 00:22:18.611 "copy": false, 00:22:18.611 "nvme_iov_md": false 00:22:18.611 }, 00:22:18.611 "driver_specific": { 00:22:18.611 "raid": { 00:22:18.611 "uuid": 
"003307a9-d75e-4477-ae5d-798e9046cde1", 00:22:18.611 "strip_size_kb": 64, 00:22:18.611 "state": "online", 00:22:18.611 "raid_level": "raid5f", 00:22:18.611 "superblock": true, 00:22:18.611 "num_base_bdevs": 3, 00:22:18.611 "num_base_bdevs_discovered": 3, 00:22:18.611 "num_base_bdevs_operational": 3, 00:22:18.611 "base_bdevs_list": [ 00:22:18.611 { 00:22:18.611 "name": "pt1", 00:22:18.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:18.611 "is_configured": true, 00:22:18.611 "data_offset": 2048, 00:22:18.611 "data_size": 63488 00:22:18.611 }, 00:22:18.611 { 00:22:18.611 "name": "pt2", 00:22:18.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:18.611 "is_configured": true, 00:22:18.611 "data_offset": 2048, 00:22:18.611 "data_size": 63488 00:22:18.611 }, 00:22:18.611 { 00:22:18.611 "name": "pt3", 00:22:18.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:18.611 "is_configured": true, 00:22:18.611 "data_offset": 2048, 00:22:18.611 "data_size": 63488 00:22:18.611 } 00:22:18.611 ] 00:22:18.611 } 00:22:18.611 } 00:22:18.611 }' 00:22:18.611 06:53:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:18.611 06:53:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:18.611 pt2 00:22:18.611 pt3' 00:22:18.611 06:53:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:18.611 06:53:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:18.611 06:53:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:18.870 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:18.870 "name": "pt1", 00:22:18.870 "aliases": [ 00:22:18.870 "00000000-0000-0000-0000-000000000001" 00:22:18.870 ], 00:22:18.870 "product_name": "passthru", 00:22:18.870 "block_size": 512, 00:22:18.870 "num_blocks": 65536, 00:22:18.870 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:18.870 "assigned_rate_limits": { 00:22:18.870 "rw_ios_per_sec": 0, 00:22:18.870 "rw_mbytes_per_sec": 0, 00:22:18.870 "r_mbytes_per_sec": 0, 00:22:18.870 "w_mbytes_per_sec": 0 00:22:18.870 }, 00:22:18.870 "claimed": true, 00:22:18.870 "claim_type": "exclusive_write", 00:22:18.870 "zoned": false, 00:22:18.870 "supported_io_types": { 00:22:18.870 "read": true, 00:22:18.870 "write": true, 00:22:18.870 "unmap": true, 00:22:18.870 "flush": true, 00:22:18.870 "reset": true, 00:22:18.870 "nvme_admin": false, 00:22:18.870 "nvme_io": false, 00:22:18.870 "nvme_io_md": false, 00:22:18.870 "write_zeroes": true, 00:22:18.870 "zcopy": true, 00:22:18.870 "get_zone_info": false, 00:22:18.870 "zone_management": false, 00:22:18.870 "zone_append": false, 00:22:18.870 "compare": false, 00:22:18.870 "compare_and_write": false, 00:22:18.870 "abort": true, 00:22:18.870 "seek_hole": false, 00:22:18.870 "seek_data": false, 00:22:18.870 "copy": true, 00:22:18.870 "nvme_iov_md": false 00:22:18.870 }, 00:22:18.870 "memory_domains": [ 00:22:18.870 { 00:22:18.870 "dma_device_id": "system", 00:22:18.870 "dma_device_type": 1 00:22:18.870 }, 00:22:18.870 { 00:22:18.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.870 "dma_device_type": 2 00:22:18.870 } 00:22:18.870 ], 00:22:18.870 "driver_specific": { 00:22:18.870 "passthru": { 00:22:18.870 "name": "pt1", 00:22:18.870 "base_bdev_name": "malloc1" 
00:22:18.870 } 00:22:18.870 } 00:22:18.870 }' 00:22:18.870 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:18.870 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:18.870 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:18.870 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.130 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.130 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:19.130 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.130 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.130 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:19.130 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.130 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.390 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:19.390 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:19.390 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:19.390 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:19.390 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:19.390 "name": "pt2", 00:22:19.390 "aliases": [ 00:22:19.390 "00000000-0000-0000-0000-000000000002" 00:22:19.390 ], 00:22:19.390 "product_name": "passthru", 00:22:19.390 "block_size": 512, 00:22:19.390 "num_blocks": 65536, 00:22:19.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.390 "assigned_rate_limits": { 00:22:19.390 "rw_ios_per_sec": 0, 00:22:19.390 "rw_mbytes_per_sec": 0, 00:22:19.390 "r_mbytes_per_sec": 0, 00:22:19.390 "w_mbytes_per_sec": 0 00:22:19.390 }, 00:22:19.390 "claimed": true, 00:22:19.390 "claim_type": "exclusive_write", 00:22:19.390 "zoned": false, 00:22:19.390 "supported_io_types": { 00:22:19.390 "read": true, 00:22:19.390 "write": true, 00:22:19.390 "unmap": true, 00:22:19.390 "flush": true, 00:22:19.390 "reset": true, 00:22:19.390 "nvme_admin": false, 00:22:19.390 "nvme_io": false, 00:22:19.390 "nvme_io_md": false, 00:22:19.390 "write_zeroes": true, 00:22:19.390 "zcopy": true, 00:22:19.390 "get_zone_info": false, 00:22:19.390 "zone_management": false, 00:22:19.390 "zone_append": false, 00:22:19.390 "compare": false, 00:22:19.390 "compare_and_write": false, 00:22:19.390 "abort": true, 00:22:19.390 "seek_hole": false, 00:22:19.390 "seek_data": false, 00:22:19.390 "copy": true, 00:22:19.390 "nvme_iov_md": false 00:22:19.390 }, 00:22:19.390 "memory_domains": [ 00:22:19.390 { 00:22:19.390 "dma_device_id": "system", 00:22:19.390 "dma_device_type": 1 00:22:19.390 }, 00:22:19.390 { 00:22:19.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.390 "dma_device_type": 2 00:22:19.390 } 00:22:19.390 ], 00:22:19.390 "driver_specific": { 00:22:19.390 "passthru": { 00:22:19.390 "name": "pt2", 00:22:19.390 "base_bdev_name": "malloc2" 00:22:19.390 } 00:22:19.390 } 00:22:19.390 }' 00:22:19.390 06:53:46 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.649 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.649 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:19.649 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.649 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.649 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:19.649 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.649 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.910 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:19.910 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.910 06:53:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.910 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:19.910 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:19.910 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:19.910 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:20.170 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:20.170 "name": "pt3", 00:22:20.170 "aliases": [ 00:22:20.170 "00000000-0000-0000-0000-000000000003" 00:22:20.170 ], 00:22:20.170 "product_name": "passthru", 00:22:20.170 "block_size": 512, 00:22:20.170 "num_blocks": 65536, 00:22:20.170 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:20.170 "assigned_rate_limits": { 00:22:20.170 "rw_ios_per_sec": 0, 00:22:20.170 "rw_mbytes_per_sec": 0, 00:22:20.170 "r_mbytes_per_sec": 0, 00:22:20.170 "w_mbytes_per_sec": 0 00:22:20.170 }, 00:22:20.170 "claimed": true, 00:22:20.170 "claim_type": "exclusive_write", 00:22:20.170 "zoned": false, 00:22:20.170 "supported_io_types": { 00:22:20.170 "read": true, 00:22:20.170 "write": true, 00:22:20.170 "unmap": true, 00:22:20.170 "flush": true, 00:22:20.170 "reset": true, 00:22:20.170 "nvme_admin": false, 00:22:20.170 "nvme_io": false, 00:22:20.170 "nvme_io_md": false, 00:22:20.170 "write_zeroes": true, 00:22:20.170 "zcopy": true, 00:22:20.170 "get_zone_info": false, 00:22:20.170 "zone_management": false, 00:22:20.170 "zone_append": false, 00:22:20.170 "compare": false, 00:22:20.170 "compare_and_write": false, 00:22:20.170 "abort": true, 00:22:20.170 "seek_hole": false, 00:22:20.170 "seek_data": false, 00:22:20.170 "copy": true, 00:22:20.170 "nvme_iov_md": false 00:22:20.170 }, 00:22:20.170 "memory_domains": [ 00:22:20.170 { 00:22:20.170 "dma_device_id": "system", 00:22:20.170 "dma_device_type": 1 00:22:20.170 }, 00:22:20.170 { 00:22:20.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.170 "dma_device_type": 2 00:22:20.170 } 00:22:20.170 ], 00:22:20.170 "driver_specific": { 00:22:20.170 "passthru": { 00:22:20.170 "name": "pt3", 00:22:20.170 "base_bdev_name": "malloc3" 00:22:20.170 } 00:22:20.170 } 00:22:20.170 }' 00:22:20.170 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.170 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 
-- # jq .block_size 00:22:20.170 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:20.170 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.170 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.430 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:20.430 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.430 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.430 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:20.430 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.430 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.430 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:20.430 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:20.430 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:22:20.689 [2024-08-14 06:53:47.824900] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:20.689 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=003307a9-d75e-4477-ae5d-798e9046cde1 00:22:20.689 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 003307a9-d75e-4477-ae5d-798e9046cde1 ']' 00:22:20.689 06:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:20.947 [2024-08-14 06:53:48.064349] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:20.947 [2024-08-14 06:53:48.064394] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:20.947 [2024-08-14 06:53:48.064497] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:20.947 [2024-08-14 06:53:48.064599] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:20.947 [2024-08-14 06:53:48.064612] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:22:20.947 06:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.947 06:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:22:21.205 06:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:22:21.205 06:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:22:21.205 06:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:21.205 06:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:21.463 06:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:21.463 06:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:21.721 06:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:21.721 06:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:21.721 06:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:21.721 06:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:21.980 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:22:21.980 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:21.980 06:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:22:21.980 06:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:21.980 06:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:21.980 06:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:22:21.980 06:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:21.980 06:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:22:21.980 06:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:21.980 06:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:22:21.980 06:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:21.980 06:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:21.980 06:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:22.239 [2024-08-14 06:53:49.362113] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:22.239 [2024-08-14 06:53:49.364197] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:22.239 [2024-08-14 06:53:49.364250] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:22.239 [2024-08-14 06:53:49.364302] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:22.239 [2024-08-14 06:53:49.364378] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:22.239 [2024-08-14 06:53:49.364398] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc3 00:22:22.239 [2024-08-14 06:53:49.364415] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:22.239 [2024-08-14 06:53:49.364426] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:22:22.239 request: 00:22:22.239 { 00:22:22.239 "name": "raid_bdev1", 00:22:22.239 "raid_level": "raid5f", 00:22:22.239 "base_bdevs": [ 00:22:22.239 "malloc1", 00:22:22.239 "malloc2", 00:22:22.239 "malloc3" 00:22:22.239 ], 00:22:22.239 "strip_size_kb": 64, 00:22:22.239 "superblock": false, 00:22:22.239 "method": "bdev_raid_create", 00:22:22.239 "req_id": 1 00:22:22.239 } 00:22:22.239 Got JSON-RPC error response 00:22:22.239 response: 00:22:22.239 { 00:22:22.239 "code": -17, 00:22:22.239 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:22.239 } 00:22:22.239 06:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:22:22.239 06:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:22:22.239 06:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:22:22.239 06:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:22:22.239 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.239 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:22:22.498 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:22:22.498 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:22:22.498 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:22.765 [2024-08-14 06:53:49.837238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:22.765 [2024-08-14 06:53:49.837314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.765 [2024-08-14 06:53:49.837334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:22.765 [2024-08-14 06:53:49.837343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.765 [2024-08-14 06:53:49.839649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.765 [2024-08-14 06:53:49.839743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:22.765 [2024-08-14 06:53:49.839838] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:22.765 [2024-08-14 06:53:49.839879] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:22.765 pt1 00:22:22.765 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:22.765 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:22.765 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:22.765 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:22.765 06:53:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:22.765 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:22.765 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:22.765 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:22.765 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:22.765 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:22.765 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.765 06:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.030 06:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:23.030 "name": "raid_bdev1", 00:22:23.030 "uuid": "003307a9-d75e-4477-ae5d-798e9046cde1", 00:22:23.030 "strip_size_kb": 64, 00:22:23.030 "state": "configuring", 00:22:23.030 "raid_level": "raid5f", 00:22:23.030 "superblock": true, 00:22:23.030 "num_base_bdevs": 3, 00:22:23.030 "num_base_bdevs_discovered": 1, 00:22:23.030 "num_base_bdevs_operational": 3, 00:22:23.030 "base_bdevs_list": [ 00:22:23.030 { 00:22:23.030 "name": "pt1", 00:22:23.030 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:23.030 "is_configured": true, 00:22:23.030 "data_offset": 2048, 00:22:23.030 "data_size": 63488 00:22:23.030 }, 00:22:23.030 { 00:22:23.030 "name": null, 00:22:23.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:23.030 "is_configured": false, 00:22:23.030 "data_offset": 2048, 00:22:23.030 "data_size": 63488 00:22:23.030 }, 00:22:23.030 { 00:22:23.030 "name": null, 00:22:23.030 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:23.030 "is_configured": false, 00:22:23.030 "data_offset": 2048, 00:22:23.030 "data_size": 63488 00:22:23.030 } 00:22:23.030 ] 00:22:23.030 }' 00:22:23.030 06:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:23.030 06:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.597 06:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:22:23.597 06:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:23.855 [2024-08-14 06:53:50.891424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:23.855 [2024-08-14 06:53:50.891600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.855 [2024-08-14 06:53:50.891662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:23.855 [2024-08-14 06:53:50.891701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.855 [2024-08-14 06:53:50.892160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.855 [2024-08-14 06:53:50.892241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:23.855 [2024-08-14 06:53:50.892357] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:23.855 [2024-08-14 
06:53:50.892412] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:23.855 pt2 00:22:23.855 06:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:24.113 [2024-08-14 06:53:51.119114] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:24.113 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:24.113 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:24.113 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:24.113 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:24.113 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:24.113 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:24.113 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:24.113 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:24.113 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:24.113 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:24.113 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.113 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.371 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:24.371 "name": "raid_bdev1", 00:22:24.371 "uuid": "003307a9-d75e-4477-ae5d-798e9046cde1", 00:22:24.371 "strip_size_kb": 64, 00:22:24.371 "state": "configuring", 00:22:24.371 "raid_level": "raid5f", 00:22:24.371 "superblock": true, 00:22:24.371 "num_base_bdevs": 3, 00:22:24.371 "num_base_bdevs_discovered": 1, 00:22:24.371 "num_base_bdevs_operational": 3, 00:22:24.371 "base_bdevs_list": [ 00:22:24.371 { 00:22:24.371 "name": "pt1", 00:22:24.371 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:24.371 "is_configured": true, 00:22:24.371 "data_offset": 2048, 00:22:24.371 "data_size": 63488 00:22:24.371 }, 00:22:24.371 { 00:22:24.371 "name": null, 00:22:24.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:24.371 "is_configured": false, 00:22:24.371 "data_offset": 2048, 00:22:24.371 "data_size": 63488 00:22:24.371 }, 00:22:24.371 { 00:22:24.371 "name": null, 00:22:24.371 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:24.371 "is_configured": false, 00:22:24.371 "data_offset": 2048, 00:22:24.371 "data_size": 63488 00:22:24.371 } 00:22:24.371 ] 00:22:24.371 }' 00:22:24.371 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:24.371 06:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.939 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:22:24.939 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:24.939 06:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:24.939 [2024-08-14 06:53:52.193380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:24.939 [2024-08-14 06:53:52.193559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.939 [2024-08-14 06:53:52.193600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:24.939 [2024-08-14 06:53:52.193637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.939 [2024-08-14 06:53:52.194113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.199 [2024-08-14 06:53:52.194200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:25.199 [2024-08-14 06:53:52.194326] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:25.199 [2024-08-14 06:53:52.194387] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:25.199 pt2 00:22:25.199 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:25.199 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:25.199 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:25.199 [2024-08-14 06:53:52.425017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:25.199 [2024-08-14 06:53:52.425206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.199 [2024-08-14 06:53:52.425251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:25.199 [2024-08-14 06:53:52.425296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.199 [2024-08-14 06:53:52.425779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.199 [2024-08-14 06:53:52.425849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:25.199 [2024-08-14 06:53:52.425980] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:25.199 [2024-08-14 06:53:52.426057] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:25.199 [2024-08-14 06:53:52.426267] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:22:25.199 [2024-08-14 06:53:52.426334] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:25.199 [2024-08-14 06:53:52.426628] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:22:25.199 [2024-08-14 06:53:52.427127] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:22:25.199 [2024-08-14 06:53:52.427200] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:22:25.199 [2024-08-14 06:53:52.427373] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.199 pt3 00:22:25.199 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:25.199 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 
00:22:25.199 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:25.199 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:25.199 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:25.199 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:25.199 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:25.199 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:25.199 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:25.199 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:25.199 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:25.199 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:25.459 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.459 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.459 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:25.459 "name": "raid_bdev1", 00:22:25.459 "uuid": "003307a9-d75e-4477-ae5d-798e9046cde1", 00:22:25.459 "strip_size_kb": 64, 00:22:25.459 "state": "online", 00:22:25.459 "raid_level": "raid5f", 00:22:25.459 "superblock": true, 00:22:25.459 "num_base_bdevs": 3, 00:22:25.459 "num_base_bdevs_discovered": 3, 00:22:25.459 "num_base_bdevs_operational": 3, 00:22:25.459 "base_bdevs_list": [ 00:22:25.459 { 00:22:25.459 "name": "pt1", 00:22:25.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:25.459 "is_configured": true, 00:22:25.459 "data_offset": 2048, 00:22:25.459 "data_size": 63488 00:22:25.459 }, 00:22:25.459 { 00:22:25.459 "name": "pt2", 00:22:25.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:25.459 "is_configured": true, 00:22:25.459 "data_offset": 2048, 00:22:25.459 "data_size": 63488 00:22:25.459 }, 00:22:25.459 { 00:22:25.459 "name": "pt3", 00:22:25.459 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:25.459 "is_configured": true, 00:22:25.459 "data_offset": 2048, 00:22:25.459 "data_size": 63488 00:22:25.459 } 00:22:25.459 ] 00:22:25.459 }' 00:22:25.459 06:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:25.459 06:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.028 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:22:26.028 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:26.028 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:26.028 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:26.028 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:26.028 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:26.028 06:53:53 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:26.028 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:26.287 [2024-08-14 06:53:53.491407] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:26.287 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:26.287 "name": "raid_bdev1", 00:22:26.287 "aliases": [ 00:22:26.287 "003307a9-d75e-4477-ae5d-798e9046cde1" 00:22:26.287 ], 00:22:26.287 "product_name": "Raid Volume", 00:22:26.287 "block_size": 512, 00:22:26.287 "num_blocks": 126976, 00:22:26.287 "uuid": "003307a9-d75e-4477-ae5d-798e9046cde1", 00:22:26.287 "assigned_rate_limits": { 00:22:26.287 "rw_ios_per_sec": 0, 00:22:26.287 "rw_mbytes_per_sec": 0, 00:22:26.287 "r_mbytes_per_sec": 0, 00:22:26.287 "w_mbytes_per_sec": 0 00:22:26.287 }, 00:22:26.287 "claimed": false, 00:22:26.287 "zoned": false, 00:22:26.287 "supported_io_types": { 00:22:26.287 "read": true, 00:22:26.287 "write": true, 00:22:26.287 "unmap": false, 00:22:26.287 "flush": false, 00:22:26.287 "reset": true, 00:22:26.287 "nvme_admin": false, 00:22:26.287 "nvme_io": false, 00:22:26.287 "nvme_io_md": false, 00:22:26.287 "write_zeroes": true, 00:22:26.287 "zcopy": false, 00:22:26.287 "get_zone_info": false, 00:22:26.287 "zone_management": false, 00:22:26.287 "zone_append": false, 00:22:26.287 "compare": false, 00:22:26.287 "compare_and_write": false, 00:22:26.287 "abort": false, 00:22:26.287 "seek_hole": false, 00:22:26.287 "seek_data": false, 00:22:26.287 "copy": false, 00:22:26.287 "nvme_iov_md": false 00:22:26.287 }, 00:22:26.287 "driver_specific": { 00:22:26.287 "raid": { 00:22:26.287 "uuid": "003307a9-d75e-4477-ae5d-798e9046cde1", 00:22:26.287 "strip_size_kb": 64, 00:22:26.287 "state": "online", 00:22:26.287 "raid_level": "raid5f", 00:22:26.287 "superblock": true, 00:22:26.287 "num_base_bdevs": 3, 00:22:26.287 "num_base_bdevs_discovered": 3, 00:22:26.287 "num_base_bdevs_operational": 3, 00:22:26.287 "base_bdevs_list": [ 00:22:26.287 { 00:22:26.287 "name": "pt1", 00:22:26.287 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:26.287 "is_configured": true, 00:22:26.287 "data_offset": 2048, 00:22:26.287 "data_size": 63488 00:22:26.287 }, 00:22:26.287 { 00:22:26.287 "name": "pt2", 00:22:26.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:26.287 "is_configured": true, 00:22:26.287 "data_offset": 2048, 00:22:26.287 "data_size": 63488 00:22:26.287 }, 00:22:26.287 { 00:22:26.287 "name": "pt3", 00:22:26.287 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:26.287 "is_configured": true, 00:22:26.287 "data_offset": 2048, 00:22:26.287 "data_size": 63488 00:22:26.287 } 00:22:26.287 ] 00:22:26.287 } 00:22:26.287 } 00:22:26.287 }' 00:22:26.287 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:26.546 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:26.546 pt2 00:22:26.546 pt3' 00:22:26.546 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:26.546 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:26.546 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:26.546 
06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:26.546 "name": "pt1", 00:22:26.546 "aliases": [ 00:22:26.546 "00000000-0000-0000-0000-000000000001" 00:22:26.546 ], 00:22:26.546 "product_name": "passthru", 00:22:26.546 "block_size": 512, 00:22:26.546 "num_blocks": 65536, 00:22:26.546 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:26.546 "assigned_rate_limits": { 00:22:26.546 "rw_ios_per_sec": 0, 00:22:26.546 "rw_mbytes_per_sec": 0, 00:22:26.546 "r_mbytes_per_sec": 0, 00:22:26.546 "w_mbytes_per_sec": 0 00:22:26.546 }, 00:22:26.546 "claimed": true, 00:22:26.546 "claim_type": "exclusive_write", 00:22:26.546 "zoned": false, 00:22:26.546 "supported_io_types": { 00:22:26.546 "read": true, 00:22:26.546 "write": true, 00:22:26.546 "unmap": true, 00:22:26.546 "flush": true, 00:22:26.546 "reset": true, 00:22:26.546 "nvme_admin": false, 00:22:26.546 "nvme_io": false, 00:22:26.546 "nvme_io_md": false, 00:22:26.546 "write_zeroes": true, 00:22:26.546 "zcopy": true, 00:22:26.546 "get_zone_info": false, 00:22:26.546 "zone_management": false, 00:22:26.546 "zone_append": false, 00:22:26.546 "compare": false, 00:22:26.546 "compare_and_write": false, 00:22:26.546 "abort": true, 00:22:26.546 "seek_hole": false, 00:22:26.546 "seek_data": false, 00:22:26.546 "copy": true, 00:22:26.546 "nvme_iov_md": false 00:22:26.546 }, 00:22:26.546 "memory_domains": [ 00:22:26.546 { 00:22:26.546 "dma_device_id": "system", 00:22:26.546 "dma_device_type": 1 00:22:26.546 }, 00:22:26.546 { 00:22:26.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.546 "dma_device_type": 2 00:22:26.546 } 00:22:26.546 ], 00:22:26.546 "driver_specific": { 00:22:26.546 "passthru": { 00:22:26.547 "name": "pt1", 00:22:26.547 "base_bdev_name": "malloc1" 00:22:26.547 } 00:22:26.547 } 00:22:26.547 }' 00:22:26.547 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.806 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.806 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:26.806 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.806 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.806 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:26.806 06:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.806 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.806 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:26.806 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.065 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.065 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:27.065 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:27.065 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:27.065 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:27.324 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:27.324 "name": "pt2", 
00:22:27.324 "aliases": [ 00:22:27.324 "00000000-0000-0000-0000-000000000002" 00:22:27.324 ], 00:22:27.324 "product_name": "passthru", 00:22:27.324 "block_size": 512, 00:22:27.324 "num_blocks": 65536, 00:22:27.324 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:27.324 "assigned_rate_limits": { 00:22:27.324 "rw_ios_per_sec": 0, 00:22:27.324 "rw_mbytes_per_sec": 0, 00:22:27.324 "r_mbytes_per_sec": 0, 00:22:27.324 "w_mbytes_per_sec": 0 00:22:27.324 }, 00:22:27.324 "claimed": true, 00:22:27.324 "claim_type": "exclusive_write", 00:22:27.324 "zoned": false, 00:22:27.324 "supported_io_types": { 00:22:27.324 "read": true, 00:22:27.324 "write": true, 00:22:27.324 "unmap": true, 00:22:27.324 "flush": true, 00:22:27.324 "reset": true, 00:22:27.324 "nvme_admin": false, 00:22:27.324 "nvme_io": false, 00:22:27.324 "nvme_io_md": false, 00:22:27.324 "write_zeroes": true, 00:22:27.324 "zcopy": true, 00:22:27.324 "get_zone_info": false, 00:22:27.324 "zone_management": false, 00:22:27.324 "zone_append": false, 00:22:27.324 "compare": false, 00:22:27.324 "compare_and_write": false, 00:22:27.324 "abort": true, 00:22:27.324 "seek_hole": false, 00:22:27.324 "seek_data": false, 00:22:27.324 "copy": true, 00:22:27.324 "nvme_iov_md": false 00:22:27.324 }, 00:22:27.324 "memory_domains": [ 00:22:27.324 { 00:22:27.324 "dma_device_id": "system", 00:22:27.324 "dma_device_type": 1 00:22:27.324 }, 00:22:27.324 { 00:22:27.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.324 "dma_device_type": 2 00:22:27.324 } 00:22:27.324 ], 00:22:27.324 "driver_specific": { 00:22:27.324 "passthru": { 00:22:27.324 "name": "pt2", 00:22:27.324 "base_bdev_name": "malloc2" 00:22:27.324 } 00:22:27.324 } 00:22:27.324 }' 00:22:27.324 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.324 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.324 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:27.324 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:27.324 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:27.324 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:27.324 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:27.324 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:27.584 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:27.584 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.584 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.584 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:27.584 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:27.584 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:27.584 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:27.845 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:27.845 "name": "pt3", 00:22:27.845 "aliases": [ 00:22:27.845 "00000000-0000-0000-0000-000000000003" 00:22:27.845 ], 00:22:27.845 "product_name": 
"passthru", 00:22:27.845 "block_size": 512, 00:22:27.845 "num_blocks": 65536, 00:22:27.845 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:27.845 "assigned_rate_limits": { 00:22:27.845 "rw_ios_per_sec": 0, 00:22:27.845 "rw_mbytes_per_sec": 0, 00:22:27.845 "r_mbytes_per_sec": 0, 00:22:27.845 "w_mbytes_per_sec": 0 00:22:27.845 }, 00:22:27.845 "claimed": true, 00:22:27.845 "claim_type": "exclusive_write", 00:22:27.845 "zoned": false, 00:22:27.845 "supported_io_types": { 00:22:27.845 "read": true, 00:22:27.845 "write": true, 00:22:27.845 "unmap": true, 00:22:27.845 "flush": true, 00:22:27.845 "reset": true, 00:22:27.845 "nvme_admin": false, 00:22:27.845 "nvme_io": false, 00:22:27.845 "nvme_io_md": false, 00:22:27.845 "write_zeroes": true, 00:22:27.845 "zcopy": true, 00:22:27.845 "get_zone_info": false, 00:22:27.845 "zone_management": false, 00:22:27.845 "zone_append": false, 00:22:27.845 "compare": false, 00:22:27.845 "compare_and_write": false, 00:22:27.845 "abort": true, 00:22:27.845 "seek_hole": false, 00:22:27.845 "seek_data": false, 00:22:27.845 "copy": true, 00:22:27.845 "nvme_iov_md": false 00:22:27.845 }, 00:22:27.845 "memory_domains": [ 00:22:27.845 { 00:22:27.845 "dma_device_id": "system", 00:22:27.845 "dma_device_type": 1 00:22:27.845 }, 00:22:27.845 { 00:22:27.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.845 "dma_device_type": 2 00:22:27.845 } 00:22:27.845 ], 00:22:27.845 "driver_specific": { 00:22:27.845 "passthru": { 00:22:27.845 "name": "pt3", 00:22:27.845 "base_bdev_name": "malloc3" 00:22:27.845 } 00:22:27.845 } 00:22:27.845 }' 00:22:27.845 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.845 06:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.845 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:27.845 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:27.845 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.104 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:28.104 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.105 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.105 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:28.105 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.105 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.105 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:28.105 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:28.105 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:22:28.364 [2024-08-14 06:53:55.448273] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:28.364 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 003307a9-d75e-4477-ae5d-798e9046cde1 '!=' 003307a9-d75e-4477-ae5d-798e9046cde1 ']' 00:22:28.364 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid5f 00:22:28.364 06:53:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:22:28.364 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:28.364 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:28.624 [2024-08-14 06:53:55.655765] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:28.624 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:28.624 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:28.624 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:28.624 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:28.624 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:28.624 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:28.624 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.624 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:28.624 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.624 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:28.624 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.624 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.883 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:28.883 "name": "raid_bdev1", 00:22:28.883 "uuid": "003307a9-d75e-4477-ae5d-798e9046cde1", 00:22:28.883 "strip_size_kb": 64, 00:22:28.883 "state": "online", 00:22:28.883 "raid_level": "raid5f", 00:22:28.883 "superblock": true, 00:22:28.883 "num_base_bdevs": 3, 00:22:28.883 "num_base_bdevs_discovered": 2, 00:22:28.883 "num_base_bdevs_operational": 2, 00:22:28.883 "base_bdevs_list": [ 00:22:28.883 { 00:22:28.883 "name": null, 00:22:28.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.883 "is_configured": false, 00:22:28.883 "data_offset": 2048, 00:22:28.883 "data_size": 63488 00:22:28.883 }, 00:22:28.883 { 00:22:28.883 "name": "pt2", 00:22:28.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:28.883 "is_configured": true, 00:22:28.883 "data_offset": 2048, 00:22:28.883 "data_size": 63488 00:22:28.883 }, 00:22:28.883 { 00:22:28.883 "name": "pt3", 00:22:28.883 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:28.883 "is_configured": true, 00:22:28.884 "data_offset": 2048, 00:22:28.884 "data_size": 63488 00:22:28.884 } 00:22:28.884 ] 00:22:28.884 }' 00:22:28.884 06:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:28.884 06:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.454 06:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:29.715 [2024-08-14 06:53:56.773920] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:29.715 
[2024-08-14 06:53:56.773964] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:29.715 [2024-08-14 06:53:56.774062] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:29.715 [2024-08-14 06:53:56.774131] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:29.715 [2024-08-14 06:53:56.774145] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:22:29.715 06:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:22:29.715 06:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.974 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:22:29.974 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:22:29.974 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:29.974 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:22:29.974 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:30.233 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:30.233 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:22:30.233 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:30.492 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:30.492 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:22:30.492 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:22:30.492 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:22:30.492 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:30.492 [2024-08-14 06:53:57.740227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:30.492 [2024-08-14 06:53:57.740312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.492 [2024-08-14 06:53:57.740333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:30.492 [2024-08-14 06:53:57.740344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.493 [2024-08-14 06:53:57.742723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.493 [2024-08-14 06:53:57.742852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:30.493 [2024-08-14 06:53:57.742957] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:30.493 [2024-08-14 06:53:57.743018] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:30.493 pt2 00:22:30.752 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 2 00:22:30.752 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:30.752 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:30.752 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:30.752 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:30.752 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:30.752 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:30.752 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:30.752 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:30.752 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:30.752 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.752 06:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.752 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:30.752 "name": "raid_bdev1", 00:22:30.752 "uuid": "003307a9-d75e-4477-ae5d-798e9046cde1", 00:22:30.752 "strip_size_kb": 64, 00:22:30.752 "state": "configuring", 00:22:30.752 "raid_level": "raid5f", 00:22:30.753 "superblock": true, 00:22:30.753 "num_base_bdevs": 3, 00:22:30.753 "num_base_bdevs_discovered": 1, 00:22:30.753 "num_base_bdevs_operational": 2, 00:22:30.753 "base_bdevs_list": [ 00:22:30.753 { 00:22:30.753 "name": null, 00:22:30.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.753 "is_configured": false, 00:22:30.753 "data_offset": 2048, 00:22:30.753 "data_size": 63488 00:22:30.753 }, 00:22:30.753 { 00:22:30.753 "name": "pt2", 00:22:30.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:30.753 "is_configured": true, 00:22:30.753 "data_offset": 2048, 00:22:30.753 "data_size": 63488 00:22:30.753 }, 00:22:30.753 { 00:22:30.753 "name": null, 00:22:30.753 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:30.753 "is_configured": false, 00:22:30.753 "data_offset": 2048, 00:22:30.753 "data_size": 63488 00:22:30.753 } 00:22:30.753 ] 00:22:30.753 }' 00:22:30.753 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:30.753 06:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.689 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:22:31.689 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:22:31.689 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:22:31.689 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:31.689 [2024-08-14 06:53:58.810466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:31.689 [2024-08-14 06:53:58.810562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.689 [2024-08-14 06:53:58.810586] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:31.689 [2024-08-14 06:53:58.810599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.689 [2024-08-14 06:53:58.811051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.689 [2024-08-14 06:53:58.811077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:31.689 [2024-08-14 06:53:58.811173] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:31.689 [2024-08-14 06:53:58.811225] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:31.689 [2024-08-14 06:53:58.811345] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:22:31.689 [2024-08-14 06:53:58.811359] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:31.689 [2024-08-14 06:53:58.811629] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:22:31.689 [2024-08-14 06:53:58.812201] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:22:31.689 [2024-08-14 06:53:58.812218] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:22:31.689 [2024-08-14 06:53:58.812528] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:31.689 pt3 00:22:31.689 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:31.689 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:31.689 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:31.689 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:31.689 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:31.689 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:31.689 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:31.689 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:31.689 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:31.689 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:31.689 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.689 06:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.948 06:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:31.948 "name": "raid_bdev1", 00:22:31.948 "uuid": "003307a9-d75e-4477-ae5d-798e9046cde1", 00:22:31.948 "strip_size_kb": 64, 00:22:31.948 "state": "online", 00:22:31.948 "raid_level": "raid5f", 00:22:31.948 "superblock": true, 00:22:31.948 "num_base_bdevs": 3, 00:22:31.948 "num_base_bdevs_discovered": 2, 00:22:31.948 "num_base_bdevs_operational": 2, 00:22:31.948 "base_bdevs_list": [ 00:22:31.948 { 00:22:31.948 "name": null, 00:22:31.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.948 "is_configured": false, 00:22:31.948 
"data_offset": 2048, 00:22:31.948 "data_size": 63488 00:22:31.948 }, 00:22:31.948 { 00:22:31.948 "name": "pt2", 00:22:31.948 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:31.948 "is_configured": true, 00:22:31.948 "data_offset": 2048, 00:22:31.948 "data_size": 63488 00:22:31.948 }, 00:22:31.948 { 00:22:31.948 "name": "pt3", 00:22:31.948 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:31.948 "is_configured": true, 00:22:31.948 "data_offset": 2048, 00:22:31.948 "data_size": 63488 00:22:31.948 } 00:22:31.948 ] 00:22:31.948 }' 00:22:31.948 06:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:31.948 06:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.885 06:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:32.885 [2024-08-14 06:54:00.016481] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:32.885 [2024-08-14 06:54:00.016615] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:32.885 [2024-08-14 06:54:00.016728] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:32.885 [2024-08-14 06:54:00.016820] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:32.885 [2024-08-14 06:54:00.016859] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:22:32.885 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:22:32.885 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.143 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:22:33.144 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:22:33.144 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 3 -gt 2 ']' 00:22:33.144 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # i=2 00:22:33.144 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:33.402 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:33.661 [2024-08-14 06:54:00.731292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:33.661 [2024-08-14 06:54:00.731413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.661 [2024-08-14 06:54:00.731454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:33.661 [2024-08-14 06:54:00.731466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.661 [2024-08-14 06:54:00.734165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.661 [2024-08-14 06:54:00.734225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:33.661 [2024-08-14 06:54:00.734329] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:33.661 [2024-08-14 
06:54:00.734382] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:33.661 [2024-08-14 06:54:00.734543] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:33.661 [2024-08-14 06:54:00.734556] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:33.661 [2024-08-14 06:54:00.734590] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:22:33.661 [2024-08-14 06:54:00.734668] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:33.661 pt1 00:22:33.661 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3 -gt 2 ']' 00:22:33.661 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:33.661 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:33.661 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:33.661 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:33.661 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:33.661 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:33.661 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:33.661 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.661 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:33.661 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.661 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.661 06:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.920 06:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.920 "name": "raid_bdev1", 00:22:33.920 "uuid": "003307a9-d75e-4477-ae5d-798e9046cde1", 00:22:33.920 "strip_size_kb": 64, 00:22:33.920 "state": "configuring", 00:22:33.920 "raid_level": "raid5f", 00:22:33.920 "superblock": true, 00:22:33.920 "num_base_bdevs": 3, 00:22:33.920 "num_base_bdevs_discovered": 1, 00:22:33.920 "num_base_bdevs_operational": 2, 00:22:33.920 "base_bdevs_list": [ 00:22:33.920 { 00:22:33.920 "name": null, 00:22:33.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.920 "is_configured": false, 00:22:33.920 "data_offset": 2048, 00:22:33.920 "data_size": 63488 00:22:33.920 }, 00:22:33.920 { 00:22:33.920 "name": "pt2", 00:22:33.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:33.920 "is_configured": true, 00:22:33.920 "data_offset": 2048, 00:22:33.920 "data_size": 63488 00:22:33.920 }, 00:22:33.920 { 00:22:33.920 "name": null, 00:22:33.920 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:33.920 "is_configured": false, 00:22:33.920 "data_offset": 2048, 00:22:33.920 "data_size": 63488 00:22:33.920 } 00:22:33.920 ] 00:22:33.920 }' 00:22:33.920 06:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.920 06:54:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.486 06:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:34.486 06:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:22:34.745 06:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:22:34.745 06:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:35.004 [2024-08-14 06:54:02.141208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:35.004 [2024-08-14 06:54:02.141293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.004 [2024-08-14 06:54:02.141317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:35.004 [2024-08-14 06:54:02.141329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.004 [2024-08-14 06:54:02.141831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.004 [2024-08-14 06:54:02.141859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:35.004 [2024-08-14 06:54:02.141969] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:35.004 [2024-08-14 06:54:02.142008] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:35.004 [2024-08-14 06:54:02.142135] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:22:35.004 [2024-08-14 06:54:02.142145] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:35.004 [2024-08-14 06:54:02.142454] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:22:35.004 [2024-08-14 06:54:02.142990] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:22:35.004 [2024-08-14 06:54:02.143015] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:22:35.004 [2024-08-14 06:54:02.143213] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.004 pt3 00:22:35.004 06:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:35.004 06:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:35.004 06:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:35.004 06:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:35.004 06:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:35.004 06:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:35.004 06:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:35.004 06:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:35.004 06:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:35.004 
06:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:35.004 06:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.004 06:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.262 06:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:35.262 "name": "raid_bdev1", 00:22:35.262 "uuid": "003307a9-d75e-4477-ae5d-798e9046cde1", 00:22:35.262 "strip_size_kb": 64, 00:22:35.262 "state": "online", 00:22:35.262 "raid_level": "raid5f", 00:22:35.262 "superblock": true, 00:22:35.262 "num_base_bdevs": 3, 00:22:35.262 "num_base_bdevs_discovered": 2, 00:22:35.262 "num_base_bdevs_operational": 2, 00:22:35.262 "base_bdevs_list": [ 00:22:35.262 { 00:22:35.262 "name": null, 00:22:35.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.262 "is_configured": false, 00:22:35.262 "data_offset": 2048, 00:22:35.262 "data_size": 63488 00:22:35.262 }, 00:22:35.262 { 00:22:35.262 "name": "pt2", 00:22:35.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:35.262 "is_configured": true, 00:22:35.262 "data_offset": 2048, 00:22:35.262 "data_size": 63488 00:22:35.262 }, 00:22:35.262 { 00:22:35.262 "name": "pt3", 00:22:35.262 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:35.262 "is_configured": true, 00:22:35.262 "data_offset": 2048, 00:22:35.262 "data_size": 63488 00:22:35.262 } 00:22:35.262 ] 00:22:35.262 }' 00:22:35.262 06:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:35.262 06:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.880 06:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:35.880 06:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:22:36.138 06:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:22:36.138 06:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:36.138 06:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:22:36.395 [2024-08-14 06:54:03.523274] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:36.395 06:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 003307a9-d75e-4477-ae5d-798e9046cde1 '!=' 003307a9-d75e-4477-ae5d-798e9046cde1 ']' 00:22:36.395 06:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 99741 00:22:36.395 06:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 99741 ']' 00:22:36.395 06:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # kill -0 99741 00:22:36.395 06:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # uname 00:22:36.396 06:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:36.396 06:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99741 00:22:36.396 06:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # 
process_name=reactor_0 00:22:36.396 06:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:36.396 06:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99741' 00:22:36.396 killing process with pid 99741 00:22:36.396 06:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@965 -- # kill 99741 00:22:36.396 [2024-08-14 06:54:03.586707] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:36.396 06:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # wait 99741 00:22:36.396 [2024-08-14 06:54:03.586915] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:36.396 [2024-08-14 06:54:03.587037] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:36.396 [2024-08-14 06:54:03.587102] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:22:36.396 [2024-08-14 06:54:03.623101] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:36.652 06:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:22:36.652 ************************************ 00:22:36.652 END TEST raid5f_superblock_test 00:22:36.652 ************************************ 00:22:36.652 00:22:36.652 real 0m22.020s 00:22:36.652 user 0m40.798s 00:22:36.652 sys 0m3.356s 00:22:36.652 06:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:36.652 06:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.909 06:54:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # '[' true = true ']' 00:22:36.909 06:54:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:22:36.909 06:54:03 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:22:36.909 06:54:03 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:36.909 06:54:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:36.909 ************************************ 00:22:36.909 START TEST raid5f_rebuild_test 00:22:36.909 ************************************ 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid5f 3 false false true 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=3 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo 
BaseBdev2 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:22:36.909 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=100446 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 100446 /var/tmp/spdk-raid.sock 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 100446 ']' 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:36.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:36.910 06:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.910 [2024-08-14 06:54:04.034273] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:22:36.910 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:36.910 Zero copy mechanism will not be used. 
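Note on the harness launch above: the rebuild test drives a standalone bdevperf process rather than the main SPDK app, and every later rpc.py call is pointed at its private socket. A minimal sketch of that launch, assuming the repo path shown in the trace and using only the flags that appear there (3 MiB I/Os, 50/50 random read/write, queue depth 2, 60 s run, bdev_raid debug logging):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    # the script then waits for the socket to come up ("waitforlisten") before issuing any RPCs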
00:22:36.910 [2024-08-14 06:54:04.034974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100446 ] 00:22:37.169 [2024-08-14 06:54:04.184519] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.169 [2024-08-14 06:54:04.236156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.169 [2024-08-14 06:54:04.279642] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:37.169 [2024-08-14 06:54:04.279761] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:37.735 06:54:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:37.735 06:54:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:22:37.735 06:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:37.735 06:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:37.993 BaseBdev1_malloc 00:22:37.993 06:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:38.251 [2024-08-14 06:54:05.356808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:38.251 [2024-08-14 06:54:05.356995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.251 [2024-08-14 06:54:05.357046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:22:38.251 [2024-08-14 06:54:05.357090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.251 [2024-08-14 06:54:05.359660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.251 [2024-08-14 06:54:05.359780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:38.251 BaseBdev1 00:22:38.251 06:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:38.251 06:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:38.510 BaseBdev2_malloc 00:22:38.510 06:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:38.768 [2024-08-14 06:54:05.828902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:38.768 [2024-08-14 06:54:05.829073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.769 [2024-08-14 06:54:05.829117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:38.769 [2024-08-14 06:54:05.829149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.769 [2024-08-14 06:54:05.831543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.769 [2024-08-14 06:54:05.831635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
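Each base device in this trace is a 32 MiB malloc bdev (512-byte blocks) wrapped in a passthru bdev, so the RAID module claims "BaseBdevN" rather than the malloc device directly. A condensed sketch of that setup, using the same rpc.py calls the script issues one device at a time (the rpc/sock shorthands and the loop are added here for brevity):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2 3; do
        $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        $rpc -s $sock bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev${i}
    done
    # the "spare" device later in the trace is built the same way, with a delay bdev layered in between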
00:22:38.769 BaseBdev2 00:22:38.769 06:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:38.769 06:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:39.028 BaseBdev3_malloc 00:22:39.028 06:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:39.286 [2024-08-14 06:54:06.313509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:39.286 [2024-08-14 06:54:06.313595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.286 [2024-08-14 06:54:06.313625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:39.286 [2024-08-14 06:54:06.313637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.286 [2024-08-14 06:54:06.316096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.286 [2024-08-14 06:54:06.316200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:39.286 BaseBdev3 00:22:39.286 06:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:39.546 spare_malloc 00:22:39.546 06:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:39.804 spare_delay 00:22:39.804 06:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:39.804 [2024-08-14 06:54:07.041756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:39.804 [2024-08-14 06:54:07.041848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.804 [2024-08-14 06:54:07.041878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:39.804 [2024-08-14 06:54:07.041893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.804 [2024-08-14 06:54:07.044468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.804 [2024-08-14 06:54:07.044517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:39.804 spare 00:22:40.063 06:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:22:40.063 [2024-08-14 06:54:07.293465] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:40.063 [2024-08-14 06:54:07.295578] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:40.063 [2024-08-14 06:54:07.295721] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:40.063 [2024-08-14 06:54:07.295839] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:22:40.063 [2024-08-14 06:54:07.295860] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:40.063 [2024-08-14 06:54:07.296250] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:22:40.063 [2024-08-14 06:54:07.296711] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:22:40.063 [2024-08-14 06:54:07.296727] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:22:40.063 [2024-08-14 06:54:07.296911] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.063 06:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:40.063 06:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:40.063 06:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:40.063 06:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:40.063 06:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:40.063 06:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:40.063 06:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:40.063 06:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:40.063 06:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:40.063 06:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:40.323 06:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.323 06:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.323 06:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:40.323 "name": "raid_bdev1", 00:22:40.323 "uuid": "994115f2-2da0-4627-a8e0-d3e58ecc538f", 00:22:40.323 "strip_size_kb": 64, 00:22:40.323 "state": "online", 00:22:40.323 "raid_level": "raid5f", 00:22:40.323 "superblock": false, 00:22:40.323 "num_base_bdevs": 3, 00:22:40.323 "num_base_bdevs_discovered": 3, 00:22:40.323 "num_base_bdevs_operational": 3, 00:22:40.323 "base_bdevs_list": [ 00:22:40.323 { 00:22:40.323 "name": "BaseBdev1", 00:22:40.323 "uuid": "d680d72b-ea6f-507e-b21a-39ccefbed354", 00:22:40.323 "is_configured": true, 00:22:40.323 "data_offset": 0, 00:22:40.323 "data_size": 65536 00:22:40.323 }, 00:22:40.323 { 00:22:40.323 "name": "BaseBdev2", 00:22:40.323 "uuid": "7697eb4a-a2f7-5e1b-b912-605783e41234", 00:22:40.323 "is_configured": true, 00:22:40.323 "data_offset": 0, 00:22:40.323 "data_size": 65536 00:22:40.323 }, 00:22:40.323 { 00:22:40.323 "name": "BaseBdev3", 00:22:40.323 "uuid": "5ce5525b-2455-5363-97ad-0dc5d113c738", 00:22:40.323 "is_configured": true, 00:22:40.323 "data_offset": 0, 00:22:40.323 "data_size": 65536 00:22:40.323 } 00:22:40.323 ] 00:22:40.323 }' 00:22:40.323 06:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:40.323 06:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.263 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:22:41.263 06:54:08 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:41.263 [2024-08-14 06:54:08.388799] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:41.263 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=131072 00:22:41.263 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.263 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:41.522 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:22:41.522 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:22:41.522 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:22:41.522 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:22:41.522 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:41.522 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:41.522 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:41.522 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:41.522 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:41.522 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:41.522 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:41.522 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:41.522 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:41.522 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:41.781 [2024-08-14 06:54:08.887753] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:22:41.781 /dev/nbd0 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:22:41.781 1+0 records in 00:22:41.781 1+0 records out 00:22:41.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273366 s, 15.0 MB/s 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:22:41.781 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:41.782 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:41.782 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:22:41.782 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # write_unit_size=256 00:22:41.782 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # echo 128 00:22:41.782 06:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:22:42.351 512+0 records in 00:22:42.351 512+0 records out 00:22:42.351 67108864 bytes (67 MB, 64 MiB) copied, 0.358738 s, 187 MB/s 00:22:42.351 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:42.351 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:42.351 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:42.351 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:42.351 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:42.351 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:42.351 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:42.351 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:42.351 [2024-08-14 06:54:09.545632] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:42.351 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:42.351 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:42.351 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:42.351 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:42.351 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:42.351 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:42.351 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:42.351 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:42.611 [2024-08-14 06:54:09.770832] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
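At this point the trace has assembled the raid5f volume, exposed it through NBD, written it end to end, and then degraded it by pulling BaseBdev1; the remainder of the run rebuilds onto the spare (after first exercising removal of the spare mid-rebuild) and finally compares BaseBdev1 against the rebuilt spare over NBD. The sequence, compressed to the exact commands visible above (rpc/sock shorthands as in the earlier sketch):

    $rpc -s $sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1
    $rpc -s $sock nbd_start_disk raid_bdev1 /dev/nbd0
    dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct   # 128 KiB = one 256-block write unit (2 x 64 KiB data strips)
    $rpc -s $sock nbd_stop_disk /dev/nbd0
    $rpc -s $sock bdev_raid_remove_base_bdev BaseBdev1                 # degrade the array: 2 of 3 base bdevs remain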
00:22:42.611 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:42.611 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:42.611 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:42.611 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:42.611 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:42.611 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:42.611 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:42.611 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:42.611 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:42.611 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:42.611 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.611 06:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.871 06:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:42.871 "name": "raid_bdev1", 00:22:42.871 "uuid": "994115f2-2da0-4627-a8e0-d3e58ecc538f", 00:22:42.871 "strip_size_kb": 64, 00:22:42.871 "state": "online", 00:22:42.871 "raid_level": "raid5f", 00:22:42.871 "superblock": false, 00:22:42.871 "num_base_bdevs": 3, 00:22:42.871 "num_base_bdevs_discovered": 2, 00:22:42.871 "num_base_bdevs_operational": 2, 00:22:42.871 "base_bdevs_list": [ 00:22:42.871 { 00:22:42.871 "name": null, 00:22:42.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.871 "is_configured": false, 00:22:42.871 "data_offset": 0, 00:22:42.871 "data_size": 65536 00:22:42.871 }, 00:22:42.871 { 00:22:42.871 "name": "BaseBdev2", 00:22:42.871 "uuid": "7697eb4a-a2f7-5e1b-b912-605783e41234", 00:22:42.871 "is_configured": true, 00:22:42.871 "data_offset": 0, 00:22:42.871 "data_size": 65536 00:22:42.871 }, 00:22:42.871 { 00:22:42.871 "name": "BaseBdev3", 00:22:42.871 "uuid": "5ce5525b-2455-5363-97ad-0dc5d113c738", 00:22:42.871 "is_configured": true, 00:22:42.871 "data_offset": 0, 00:22:42.871 "data_size": 65536 00:22:42.871 } 00:22:42.871 ] 00:22:42.871 }' 00:22:42.871 06:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:42.871 06:54:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.439 06:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:43.698 [2024-08-14 06:54:10.873179] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:43.698 [2024-08-14 06:54:10.877579] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027cd0 00:22:43.698 [2024-08-14 06:54:10.880300] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:43.698 06:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:45.077 06:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:45.077 06:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:45.077 06:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:45.077 06:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:45.077 06:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:45.077 06:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.077 06:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.077 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:45.077 "name": "raid_bdev1", 00:22:45.077 "uuid": "994115f2-2da0-4627-a8e0-d3e58ecc538f", 00:22:45.077 "strip_size_kb": 64, 00:22:45.077 "state": "online", 00:22:45.077 "raid_level": "raid5f", 00:22:45.077 "superblock": false, 00:22:45.077 "num_base_bdevs": 3, 00:22:45.077 "num_base_bdevs_discovered": 3, 00:22:45.077 "num_base_bdevs_operational": 3, 00:22:45.077 "process": { 00:22:45.077 "type": "rebuild", 00:22:45.077 "target": "spare", 00:22:45.077 "progress": { 00:22:45.077 "blocks": 24576, 00:22:45.077 "percent": 18 00:22:45.077 } 00:22:45.077 }, 00:22:45.077 "base_bdevs_list": [ 00:22:45.077 { 00:22:45.077 "name": "spare", 00:22:45.077 "uuid": "8eafe587-1db8-516e-bdf0-184d1fb731a6", 00:22:45.077 "is_configured": true, 00:22:45.077 "data_offset": 0, 00:22:45.077 "data_size": 65536 00:22:45.077 }, 00:22:45.077 { 00:22:45.077 "name": "BaseBdev2", 00:22:45.077 "uuid": "7697eb4a-a2f7-5e1b-b912-605783e41234", 00:22:45.077 "is_configured": true, 00:22:45.077 "data_offset": 0, 00:22:45.077 "data_size": 65536 00:22:45.077 }, 00:22:45.077 { 00:22:45.077 "name": "BaseBdev3", 00:22:45.077 "uuid": "5ce5525b-2455-5363-97ad-0dc5d113c738", 00:22:45.077 "is_configured": true, 00:22:45.077 "data_offset": 0, 00:22:45.077 "data_size": 65536 00:22:45.077 } 00:22:45.077 ] 00:22:45.077 }' 00:22:45.077 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:45.077 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:45.077 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:45.077 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:45.077 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:45.335 [2024-08-14 06:54:12.476669] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:45.335 [2024-08-14 06:54:12.496387] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:45.335 [2024-08-14 06:54:12.496483] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.335 [2024-08-14 06:54:12.496506] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:45.335 [2024-08-14 06:54:12.496515] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:45.336 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 2 00:22:45.336 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:45.336 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:45.336 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:45.336 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:45.336 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:45.336 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:45.336 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:45.336 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:45.336 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:45.336 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.336 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.594 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.594 "name": "raid_bdev1", 00:22:45.594 "uuid": "994115f2-2da0-4627-a8e0-d3e58ecc538f", 00:22:45.594 "strip_size_kb": 64, 00:22:45.594 "state": "online", 00:22:45.594 "raid_level": "raid5f", 00:22:45.594 "superblock": false, 00:22:45.594 "num_base_bdevs": 3, 00:22:45.594 "num_base_bdevs_discovered": 2, 00:22:45.594 "num_base_bdevs_operational": 2, 00:22:45.594 "base_bdevs_list": [ 00:22:45.594 { 00:22:45.594 "name": null, 00:22:45.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.594 "is_configured": false, 00:22:45.594 "data_offset": 0, 00:22:45.594 "data_size": 65536 00:22:45.594 }, 00:22:45.594 { 00:22:45.594 "name": "BaseBdev2", 00:22:45.594 "uuid": "7697eb4a-a2f7-5e1b-b912-605783e41234", 00:22:45.594 "is_configured": true, 00:22:45.594 "data_offset": 0, 00:22:45.594 "data_size": 65536 00:22:45.594 }, 00:22:45.594 { 00:22:45.594 "name": "BaseBdev3", 00:22:45.594 "uuid": "5ce5525b-2455-5363-97ad-0dc5d113c738", 00:22:45.594 "is_configured": true, 00:22:45.594 "data_offset": 0, 00:22:45.594 "data_size": 65536 00:22:45.594 } 00:22:45.594 ] 00:22:45.594 }' 00:22:45.594 06:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:45.594 06:54:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.162 06:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:46.162 06:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:46.162 06:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:46.162 06:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:46.162 06:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:46.162 06:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.162 06:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.421 06:54:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:46.421 "name": "raid_bdev1", 00:22:46.421 "uuid": "994115f2-2da0-4627-a8e0-d3e58ecc538f", 00:22:46.421 "strip_size_kb": 64, 00:22:46.421 "state": "online", 00:22:46.421 "raid_level": "raid5f", 00:22:46.421 "superblock": false, 00:22:46.421 "num_base_bdevs": 3, 00:22:46.421 "num_base_bdevs_discovered": 2, 00:22:46.421 "num_base_bdevs_operational": 2, 00:22:46.421 "base_bdevs_list": [ 00:22:46.421 { 00:22:46.421 "name": null, 00:22:46.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.421 "is_configured": false, 00:22:46.421 "data_offset": 0, 00:22:46.421 "data_size": 65536 00:22:46.421 }, 00:22:46.421 { 00:22:46.421 "name": "BaseBdev2", 00:22:46.421 "uuid": "7697eb4a-a2f7-5e1b-b912-605783e41234", 00:22:46.421 "is_configured": true, 00:22:46.421 "data_offset": 0, 00:22:46.421 "data_size": 65536 00:22:46.421 }, 00:22:46.421 { 00:22:46.421 "name": "BaseBdev3", 00:22:46.421 "uuid": "5ce5525b-2455-5363-97ad-0dc5d113c738", 00:22:46.421 "is_configured": true, 00:22:46.421 "data_offset": 0, 00:22:46.421 "data_size": 65536 00:22:46.421 } 00:22:46.421 ] 00:22:46.421 }' 00:22:46.421 06:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:46.421 06:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:46.421 06:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:46.680 06:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:46.680 06:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:46.680 [2024-08-14 06:54:13.932386] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:46.941 [2024-08-14 06:54:13.936289] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:22:46.941 [2024-08-14 06:54:13.938669] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:46.941 06:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:22:47.880 06:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:47.880 06:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:47.880 06:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:47.880 06:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:47.880 06:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:47.880 06:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.880 06:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:48.139 "name": "raid_bdev1", 00:22:48.139 "uuid": "994115f2-2da0-4627-a8e0-d3e58ecc538f", 00:22:48.139 "strip_size_kb": 64, 00:22:48.139 "state": "online", 00:22:48.139 "raid_level": "raid5f", 00:22:48.139 "superblock": false, 00:22:48.139 "num_base_bdevs": 3, 00:22:48.139 "num_base_bdevs_discovered": 3, 
00:22:48.139 "num_base_bdevs_operational": 3, 00:22:48.139 "process": { 00:22:48.139 "type": "rebuild", 00:22:48.139 "target": "spare", 00:22:48.139 "progress": { 00:22:48.139 "blocks": 24576, 00:22:48.139 "percent": 18 00:22:48.139 } 00:22:48.139 }, 00:22:48.139 "base_bdevs_list": [ 00:22:48.139 { 00:22:48.139 "name": "spare", 00:22:48.139 "uuid": "8eafe587-1db8-516e-bdf0-184d1fb731a6", 00:22:48.139 "is_configured": true, 00:22:48.139 "data_offset": 0, 00:22:48.139 "data_size": 65536 00:22:48.139 }, 00:22:48.139 { 00:22:48.139 "name": "BaseBdev2", 00:22:48.139 "uuid": "7697eb4a-a2f7-5e1b-b912-605783e41234", 00:22:48.139 "is_configured": true, 00:22:48.139 "data_offset": 0, 00:22:48.139 "data_size": 65536 00:22:48.139 }, 00:22:48.139 { 00:22:48.139 "name": "BaseBdev3", 00:22:48.139 "uuid": "5ce5525b-2455-5363-97ad-0dc5d113c738", 00:22:48.139 "is_configured": true, 00:22:48.139 "data_offset": 0, 00:22:48.139 "data_size": 65536 00:22:48.139 } 00:22:48.139 ] 00:22:48.139 }' 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=3 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=1014 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.139 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.397 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:48.397 "name": "raid_bdev1", 00:22:48.397 "uuid": "994115f2-2da0-4627-a8e0-d3e58ecc538f", 00:22:48.397 "strip_size_kb": 64, 00:22:48.397 "state": "online", 00:22:48.397 "raid_level": "raid5f", 00:22:48.397 "superblock": false, 00:22:48.397 "num_base_bdevs": 3, 00:22:48.397 "num_base_bdevs_discovered": 3, 00:22:48.397 "num_base_bdevs_operational": 3, 00:22:48.397 "process": { 00:22:48.397 "type": "rebuild", 00:22:48.397 "target": "spare", 00:22:48.397 "progress": { 00:22:48.397 "blocks": 30720, 00:22:48.397 "percent": 23 00:22:48.397 } 00:22:48.397 }, 00:22:48.397 "base_bdevs_list": [ 00:22:48.397 { 
00:22:48.397 "name": "spare", 00:22:48.397 "uuid": "8eafe587-1db8-516e-bdf0-184d1fb731a6", 00:22:48.397 "is_configured": true, 00:22:48.397 "data_offset": 0, 00:22:48.397 "data_size": 65536 00:22:48.397 }, 00:22:48.397 { 00:22:48.397 "name": "BaseBdev2", 00:22:48.397 "uuid": "7697eb4a-a2f7-5e1b-b912-605783e41234", 00:22:48.397 "is_configured": true, 00:22:48.397 "data_offset": 0, 00:22:48.397 "data_size": 65536 00:22:48.397 }, 00:22:48.397 { 00:22:48.397 "name": "BaseBdev3", 00:22:48.397 "uuid": "5ce5525b-2455-5363-97ad-0dc5d113c738", 00:22:48.397 "is_configured": true, 00:22:48.397 "data_offset": 0, 00:22:48.397 "data_size": 65536 00:22:48.397 } 00:22:48.397 ] 00:22:48.397 }' 00:22:48.397 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:48.397 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.397 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:48.397 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.397 06:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:22:49.775 06:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:49.775 06:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.775 06:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:49.775 06:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:49.775 06:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:49.775 06:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:49.775 06:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.775 06:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.775 06:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:49.775 "name": "raid_bdev1", 00:22:49.775 "uuid": "994115f2-2da0-4627-a8e0-d3e58ecc538f", 00:22:49.775 "strip_size_kb": 64, 00:22:49.775 "state": "online", 00:22:49.775 "raid_level": "raid5f", 00:22:49.775 "superblock": false, 00:22:49.775 "num_base_bdevs": 3, 00:22:49.775 "num_base_bdevs_discovered": 3, 00:22:49.775 "num_base_bdevs_operational": 3, 00:22:49.775 "process": { 00:22:49.775 "type": "rebuild", 00:22:49.775 "target": "spare", 00:22:49.775 "progress": { 00:22:49.775 "blocks": 59392, 00:22:49.775 "percent": 45 00:22:49.775 } 00:22:49.775 }, 00:22:49.775 "base_bdevs_list": [ 00:22:49.775 { 00:22:49.775 "name": "spare", 00:22:49.775 "uuid": "8eafe587-1db8-516e-bdf0-184d1fb731a6", 00:22:49.775 "is_configured": true, 00:22:49.775 "data_offset": 0, 00:22:49.775 "data_size": 65536 00:22:49.775 }, 00:22:49.775 { 00:22:49.775 "name": "BaseBdev2", 00:22:49.775 "uuid": "7697eb4a-a2f7-5e1b-b912-605783e41234", 00:22:49.775 "is_configured": true, 00:22:49.775 "data_offset": 0, 00:22:49.775 "data_size": 65536 00:22:49.775 }, 00:22:49.775 { 00:22:49.775 "name": "BaseBdev3", 00:22:49.775 "uuid": "5ce5525b-2455-5363-97ad-0dc5d113c738", 00:22:49.775 "is_configured": true, 00:22:49.775 "data_offset": 0, 00:22:49.775 "data_size": 65536 
00:22:49.775 } 00:22:49.775 ] 00:22:49.775 }' 00:22:49.775 06:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:49.775 06:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:49.775 06:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:49.775 06:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.775 06:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:22:51.150 06:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:51.150 06:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.150 06:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:51.150 06:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:51.150 06:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:51.150 06:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:51.150 06:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.150 06:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.150 06:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:51.150 "name": "raid_bdev1", 00:22:51.150 "uuid": "994115f2-2da0-4627-a8e0-d3e58ecc538f", 00:22:51.150 "strip_size_kb": 64, 00:22:51.150 "state": "online", 00:22:51.150 "raid_level": "raid5f", 00:22:51.150 "superblock": false, 00:22:51.150 "num_base_bdevs": 3, 00:22:51.150 "num_base_bdevs_discovered": 3, 00:22:51.150 "num_base_bdevs_operational": 3, 00:22:51.150 "process": { 00:22:51.150 "type": "rebuild", 00:22:51.150 "target": "spare", 00:22:51.150 "progress": { 00:22:51.150 "blocks": 86016, 00:22:51.150 "percent": 65 00:22:51.150 } 00:22:51.150 }, 00:22:51.150 "base_bdevs_list": [ 00:22:51.150 { 00:22:51.150 "name": "spare", 00:22:51.150 "uuid": "8eafe587-1db8-516e-bdf0-184d1fb731a6", 00:22:51.150 "is_configured": true, 00:22:51.150 "data_offset": 0, 00:22:51.150 "data_size": 65536 00:22:51.150 }, 00:22:51.150 { 00:22:51.150 "name": "BaseBdev2", 00:22:51.150 "uuid": "7697eb4a-a2f7-5e1b-b912-605783e41234", 00:22:51.150 "is_configured": true, 00:22:51.150 "data_offset": 0, 00:22:51.150 "data_size": 65536 00:22:51.150 }, 00:22:51.150 { 00:22:51.150 "name": "BaseBdev3", 00:22:51.150 "uuid": "5ce5525b-2455-5363-97ad-0dc5d113c738", 00:22:51.150 "is_configured": true, 00:22:51.150 "data_offset": 0, 00:22:51.150 "data_size": 65536 00:22:51.150 } 00:22:51.150 ] 00:22:51.150 }' 00:22:51.150 06:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:51.150 06:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:51.150 06:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:51.150 06:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.150 06:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:22:52.544 06:54:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:52.544 06:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.544 06:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:52.544 06:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:52.544 06:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:52.544 06:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:52.544 06:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.544 06:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.544 06:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:52.544 "name": "raid_bdev1", 00:22:52.544 "uuid": "994115f2-2da0-4627-a8e0-d3e58ecc538f", 00:22:52.544 "strip_size_kb": 64, 00:22:52.544 "state": "online", 00:22:52.544 "raid_level": "raid5f", 00:22:52.544 "superblock": false, 00:22:52.544 "num_base_bdevs": 3, 00:22:52.544 "num_base_bdevs_discovered": 3, 00:22:52.544 "num_base_bdevs_operational": 3, 00:22:52.544 "process": { 00:22:52.544 "type": "rebuild", 00:22:52.544 "target": "spare", 00:22:52.544 "progress": { 00:22:52.544 "blocks": 112640, 00:22:52.544 "percent": 85 00:22:52.544 } 00:22:52.544 }, 00:22:52.544 "base_bdevs_list": [ 00:22:52.544 { 00:22:52.544 "name": "spare", 00:22:52.544 "uuid": "8eafe587-1db8-516e-bdf0-184d1fb731a6", 00:22:52.544 "is_configured": true, 00:22:52.544 "data_offset": 0, 00:22:52.544 "data_size": 65536 00:22:52.544 }, 00:22:52.544 { 00:22:52.544 "name": "BaseBdev2", 00:22:52.544 "uuid": "7697eb4a-a2f7-5e1b-b912-605783e41234", 00:22:52.544 "is_configured": true, 00:22:52.544 "data_offset": 0, 00:22:52.544 "data_size": 65536 00:22:52.544 }, 00:22:52.544 { 00:22:52.544 "name": "BaseBdev3", 00:22:52.544 "uuid": "5ce5525b-2455-5363-97ad-0dc5d113c738", 00:22:52.544 "is_configured": true, 00:22:52.544 "data_offset": 0, 00:22:52.544 "data_size": 65536 00:22:52.544 } 00:22:52.544 ] 00:22:52.544 }' 00:22:52.544 06:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:52.544 06:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:52.544 06:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:52.544 06:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:52.544 06:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:22:53.480 [2024-08-14 06:54:20.402199] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:53.480 [2024-08-14 06:54:20.402414] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:53.480 [2024-08-14 06:54:20.402519] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.480 06:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:53.480 06:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:53.480 06:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_name=raid_bdev1 00:22:53.480 06:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:53.480 06:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:53.480 06:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:53.480 06:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.480 06:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.740 06:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:53.740 "name": "raid_bdev1", 00:22:53.740 "uuid": "994115f2-2da0-4627-a8e0-d3e58ecc538f", 00:22:53.740 "strip_size_kb": 64, 00:22:53.740 "state": "online", 00:22:53.740 "raid_level": "raid5f", 00:22:53.740 "superblock": false, 00:22:53.740 "num_base_bdevs": 3, 00:22:53.740 "num_base_bdevs_discovered": 3, 00:22:53.740 "num_base_bdevs_operational": 3, 00:22:53.740 "base_bdevs_list": [ 00:22:53.740 { 00:22:53.740 "name": "spare", 00:22:53.740 "uuid": "8eafe587-1db8-516e-bdf0-184d1fb731a6", 00:22:53.740 "is_configured": true, 00:22:53.740 "data_offset": 0, 00:22:53.740 "data_size": 65536 00:22:53.740 }, 00:22:53.740 { 00:22:53.740 "name": "BaseBdev2", 00:22:53.740 "uuid": "7697eb4a-a2f7-5e1b-b912-605783e41234", 00:22:53.740 "is_configured": true, 00:22:53.740 "data_offset": 0, 00:22:53.740 "data_size": 65536 00:22:53.740 }, 00:22:53.740 { 00:22:53.740 "name": "BaseBdev3", 00:22:53.740 "uuid": "5ce5525b-2455-5363-97ad-0dc5d113c738", 00:22:53.740 "is_configured": true, 00:22:53.740 "data_offset": 0, 00:22:53.740 "data_size": 65536 00:22:53.740 } 00:22:53.740 ] 00:22:53.740 }' 00:22:53.740 06:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:53.740 06:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:53.740 06:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:53.999 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:53.999 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:22:53.999 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:53.999 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:53.999 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:53.999 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:53.999 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:53.999 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.999 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:54.258 "name": "raid_bdev1", 00:22:54.258 "uuid": "994115f2-2da0-4627-a8e0-d3e58ecc538f", 00:22:54.258 "strip_size_kb": 64, 00:22:54.258 "state": "online", 00:22:54.258 "raid_level": "raid5f", 00:22:54.258 
"superblock": false, 00:22:54.258 "num_base_bdevs": 3, 00:22:54.258 "num_base_bdevs_discovered": 3, 00:22:54.258 "num_base_bdevs_operational": 3, 00:22:54.258 "base_bdevs_list": [ 00:22:54.258 { 00:22:54.258 "name": "spare", 00:22:54.258 "uuid": "8eafe587-1db8-516e-bdf0-184d1fb731a6", 00:22:54.258 "is_configured": true, 00:22:54.258 "data_offset": 0, 00:22:54.258 "data_size": 65536 00:22:54.258 }, 00:22:54.258 { 00:22:54.258 "name": "BaseBdev2", 00:22:54.258 "uuid": "7697eb4a-a2f7-5e1b-b912-605783e41234", 00:22:54.258 "is_configured": true, 00:22:54.258 "data_offset": 0, 00:22:54.258 "data_size": 65536 00:22:54.258 }, 00:22:54.258 { 00:22:54.258 "name": "BaseBdev3", 00:22:54.258 "uuid": "5ce5525b-2455-5363-97ad-0dc5d113c738", 00:22:54.258 "is_configured": true, 00:22:54.258 "data_offset": 0, 00:22:54.258 "data_size": 65536 00:22:54.258 } 00:22:54.258 ] 00:22:54.258 }' 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.258 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.517 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:54.517 "name": "raid_bdev1", 00:22:54.517 "uuid": "994115f2-2da0-4627-a8e0-d3e58ecc538f", 00:22:54.517 "strip_size_kb": 64, 00:22:54.517 "state": "online", 00:22:54.517 "raid_level": "raid5f", 00:22:54.517 "superblock": false, 00:22:54.517 "num_base_bdevs": 3, 00:22:54.517 "num_base_bdevs_discovered": 3, 00:22:54.517 "num_base_bdevs_operational": 3, 00:22:54.517 "base_bdevs_list": [ 00:22:54.517 { 00:22:54.517 "name": "spare", 00:22:54.517 "uuid": "8eafe587-1db8-516e-bdf0-184d1fb731a6", 00:22:54.517 "is_configured": true, 00:22:54.517 "data_offset": 0, 00:22:54.517 "data_size": 65536 00:22:54.517 }, 00:22:54.517 { 00:22:54.517 "name": "BaseBdev2", 00:22:54.517 "uuid": "7697eb4a-a2f7-5e1b-b912-605783e41234", 
00:22:54.517 "is_configured": true, 00:22:54.517 "data_offset": 0, 00:22:54.517 "data_size": 65536 00:22:54.517 }, 00:22:54.517 { 00:22:54.517 "name": "BaseBdev3", 00:22:54.517 "uuid": "5ce5525b-2455-5363-97ad-0dc5d113c738", 00:22:54.517 "is_configured": true, 00:22:54.517 "data_offset": 0, 00:22:54.517 "data_size": 65536 00:22:54.517 } 00:22:54.517 ] 00:22:54.517 }' 00:22:54.517 06:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:54.517 06:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.083 06:54:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:55.341 [2024-08-14 06:54:22.552820] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:55.341 [2024-08-14 06:54:22.552867] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:55.341 [2024-08-14 06:54:22.553028] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:55.341 [2024-08-14 06:54:22.553122] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:55.341 [2024-08-14 06:54:22.553140] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:22:55.341 06:54:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.341 06:54:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:22:55.600 06:54:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:22:55.600 06:54:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:22:55.600 06:54:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:22:55.600 06:54:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:55.600 06:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:55.600 06:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:55.600 06:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:55.600 06:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:55.600 06:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:55.600 06:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:55.600 06:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:55.600 06:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:55.600 06:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:55.858 /dev/nbd0 00:22:55.858 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:55.858 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:55.858 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:22:55.858 06:54:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@865 -- # local i 00:22:55.859 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:22:55.859 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:22:55.859 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:22:56.117 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:22:56.117 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:22:56.117 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:22:56.117 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:56.117 1+0 records in 00:22:56.117 1+0 records out 00:22:56.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362888 s, 11.3 MB/s 00:22:56.117 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.117 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:22:56.117 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.117 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:22:56.117 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:22:56.117 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:56.117 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:56.117 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:56.117 /dev/nbd1 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:56.376 1+0 records in 00:22:56.376 1+0 records out 00:22:56.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417421 s, 9.8 MB/s 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@882 -- # size=4096 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:56.376 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:56.640 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:56.640 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:56.640 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:56.640 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:56.640 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:56.640 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:56.640 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:56.640 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:56.640 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:56.640 06:54:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:22:56.903 06:54:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 100446 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 100446 ']' 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 100446 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100446 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:56.903 killing process with pid 100446 00:22:56.903 Received shutdown signal, test time was about 60.000000 seconds 00:22:56.903 00:22:56.903 Latency(us) 00:22:56.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.903 =================================================================================================================== 00:22:56.903 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100446' 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@965 -- # kill 100446 00:22:56.903 [2024-08-14 06:54:24.062856] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:56.903 06:54:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # wait 100446 00:22:56.903 [2024-08-14 06:54:24.106046] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:57.162 06:54:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:22:57.162 00:22:57.162 real 0m20.400s 00:22:57.162 user 0m31.158s 00:22:57.162 sys 0m2.903s 00:22:57.162 06:54:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:57.162 06:54:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.162 ************************************ 00:22:57.162 END TEST raid5f_rebuild_test 00:22:57.162 ************************************ 00:22:57.162 06:54:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:22:57.162 06:54:24 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:22:57.162 06:54:24 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:57.162 06:54:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:57.162 ************************************ 00:22:57.162 START TEST raid5f_rebuild_test_sb 00:22:57.162 ************************************ 00:22:57.162 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid5f 3 true false true 00:22:57.162 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:22:57.162 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=3 00:22:57.162 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:22:57.162 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:22:57.162 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@588 -- # local verify=true 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=100939 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 100939 /var/tmp/spdk-raid.sock 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 100939 ']' 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:57.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:57.421 06:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.421 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:57.421 Zero copy mechanism will not be used. 00:22:57.421 [2024-08-14 06:54:24.502105] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:22:57.421 [2024-08-14 06:54:24.502266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100939 ] 00:22:57.421 [2024-08-14 06:54:24.632403] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.681 [2024-08-14 06:54:24.685231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.681 [2024-08-14 06:54:24.728379] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:57.681 [2024-08-14 06:54:24.728426] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:58.249 06:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:58.249 06:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:22:58.249 06:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:58.249 06:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:58.509 BaseBdev1_malloc 00:22:58.509 06:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:58.767 [2024-08-14 06:54:25.905488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:58.767 [2024-08-14 06:54:25.905593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.767 [2024-08-14 06:54:25.905626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:22:58.767 [2024-08-14 06:54:25.905640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.767 [2024-08-14 06:54:25.908283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.767 [2024-08-14 06:54:25.908346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:58.767 BaseBdev1 00:22:58.767 06:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:58.767 06:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:59.027 BaseBdev2_malloc 00:22:59.027 06:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:59.286 [2024-08-14 06:54:26.451357] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:59.286 [2024-08-14 06:54:26.451458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.286 [2024-08-14 06:54:26.451487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:59.286 [2024-08-14 06:54:26.451501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.286 [2024-08-14 06:54:26.454071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.286 [2024-08-14 06:54:26.454147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:59.286 BaseBdev2 00:22:59.286 06:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:59.286 06:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:59.545 BaseBdev3_malloc 00:22:59.545 06:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:59.804 [2024-08-14 06:54:27.043238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:59.804 [2024-08-14 06:54:27.043336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.804 [2024-08-14 06:54:27.043382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:59.804 [2024-08-14 06:54:27.043396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.804 [2024-08-14 06:54:27.045867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.804 [2024-08-14 06:54:27.045937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:59.804 BaseBdev3 00:23:00.063 06:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:00.063 spare_malloc 00:23:00.321 06:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:00.322 spare_delay 00:23:00.322 06:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:00.580 [2024-08-14 06:54:27.775651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:00.580 [2024-08-14 06:54:27.775750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.580 [2024-08-14 06:54:27.775784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:00.580 [2024-08-14 06:54:27.775801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.580 [2024-08-14 06:54:27.778386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.580 [2024-08-14 06:54:27.778432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:00.580 spare 00:23:00.580 06:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:23:00.839 [2024-08-14 06:54:28.047323] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:00.839 [2024-08-14 06:54:28.049535] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:00.839 [2024-08-14 06:54:28.049626] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:00.839 [2024-08-14 06:54:28.049841] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:23:00.840 [2024-08-14 06:54:28.049864] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:00.840 [2024-08-14 06:54:28.050276] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:23:00.840 [2024-08-14 06:54:28.050793] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:23:00.840 [2024-08-14 06:54:28.050830] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:23:00.840 [2024-08-14 06:54:28.051009] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:00.840 06:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:00.840 06:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:00.840 06:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:00.840 06:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:00.840 06:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:00.840 06:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:00.840 06:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:00.840 06:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:00.840 06:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:00.840 06:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:00.840 06:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.840 06:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.447 06:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:01.447 "name": "raid_bdev1", 00:23:01.447 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:01.447 "strip_size_kb": 64, 00:23:01.447 "state": "online", 00:23:01.447 "raid_level": "raid5f", 00:23:01.447 "superblock": true, 00:23:01.447 "num_base_bdevs": 3, 00:23:01.447 "num_base_bdevs_discovered": 3, 00:23:01.447 "num_base_bdevs_operational": 3, 00:23:01.447 "base_bdevs_list": [ 00:23:01.447 { 00:23:01.447 "name": "BaseBdev1", 00:23:01.447 "uuid": "da50a682-ee61-5003-8cae-b2fd903e53cf", 00:23:01.447 "is_configured": true, 00:23:01.447 "data_offset": 2048, 00:23:01.447 "data_size": 63488 00:23:01.447 }, 00:23:01.447 { 00:23:01.447 "name": "BaseBdev2", 00:23:01.447 "uuid": 
"f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:01.447 "is_configured": true, 00:23:01.447 "data_offset": 2048, 00:23:01.447 "data_size": 63488 00:23:01.447 }, 00:23:01.447 { 00:23:01.447 "name": "BaseBdev3", 00:23:01.447 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:01.447 "is_configured": true, 00:23:01.447 "data_offset": 2048, 00:23:01.447 "data_size": 63488 00:23:01.447 } 00:23:01.447 ] 00:23:01.447 }' 00:23:01.447 06:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:01.447 06:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:02.029 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:02.029 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:23:02.029 [2024-08-14 06:54:29.255036] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:02.029 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=126976 00:23:02.029 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.288 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:02.547 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:23:02.547 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:23:02.547 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:23:02.547 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:23:02.547 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:02.547 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:02.547 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:02.547 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:02.547 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:02.547 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:02.547 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:02.547 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:02.547 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:02.547 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:02.547 [2024-08-14 06:54:29.790210] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:23:02.547 /dev/nbd0 00:23:02.806 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:02.806 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:02.806 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:23:02.806 06:54:29 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@865 -- # local i 00:23:02.806 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:23:02.806 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:23:02.806 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:23:02.806 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:23:02.806 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:23:02.806 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:23:02.807 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:02.807 1+0 records in 00:23:02.807 1+0 records out 00:23:02.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298696 s, 13.7 MB/s 00:23:02.807 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:02.807 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:23:02.807 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:02.807 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:23:02.807 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:23:02.807 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:02.807 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:02.807 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:23:02.807 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # write_unit_size=256 00:23:02.807 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # echo 128 00:23:02.807 06:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:23:03.065 496+0 records in 00:23:03.065 496+0 records out 00:23:03.065 65011712 bytes (65 MB, 62 MiB) copied, 0.393275 s, 165 MB/s 00:23:03.065 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:03.065 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:03.065 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:03.065 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:03.065 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:03.065 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:03.065 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:03.323 [2024-08-14 06:54:30.513642] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.323 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:03.323 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:03.323 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:03.323 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:03.323 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:03.323 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:03.323 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:03.323 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:03.323 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:03.582 [2024-08-14 06:54:30.741389] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:03.582 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:03.582 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:03.582 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:03.582 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:03.582 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:03.582 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:03.582 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:03.582 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:03.582 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:03.582 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:03.582 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.582 06:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.841 06:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:03.841 "name": "raid_bdev1", 00:23:03.841 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:03.841 "strip_size_kb": 64, 00:23:03.841 "state": "online", 00:23:03.841 "raid_level": "raid5f", 00:23:03.841 "superblock": true, 00:23:03.841 "num_base_bdevs": 3, 00:23:03.841 "num_base_bdevs_discovered": 2, 00:23:03.841 "num_base_bdevs_operational": 2, 00:23:03.841 "base_bdevs_list": [ 00:23:03.841 { 00:23:03.841 "name": null, 00:23:03.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.841 "is_configured": false, 00:23:03.841 "data_offset": 2048, 00:23:03.841 "data_size": 63488 00:23:03.841 }, 00:23:03.841 { 00:23:03.841 "name": "BaseBdev2", 00:23:03.841 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:03.841 "is_configured": true, 00:23:03.841 "data_offset": 2048, 00:23:03.841 "data_size": 63488 00:23:03.841 }, 00:23:03.841 { 00:23:03.841 "name": "BaseBdev3", 00:23:03.841 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:03.841 "is_configured": true, 00:23:03.841 "data_offset": 2048, 00:23:03.841 "data_size": 63488 
00:23:03.841 } 00:23:03.841 ] 00:23:03.841 }' 00:23:03.841 06:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:03.841 06:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.778 06:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:04.778 [2024-08-14 06:54:31.931479] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:04.778 [2024-08-14 06:54:31.935604] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000255d0 00:23:04.778 [2024-08-14 06:54:31.938204] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:04.778 06:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:05.713 06:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:05.713 06:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:05.713 06:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:05.714 06:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:05.714 06:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:05.714 06:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.714 06:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.972 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:05.972 "name": "raid_bdev1", 00:23:05.972 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:05.972 "strip_size_kb": 64, 00:23:05.972 "state": "online", 00:23:05.972 "raid_level": "raid5f", 00:23:05.972 "superblock": true, 00:23:05.972 "num_base_bdevs": 3, 00:23:05.972 "num_base_bdevs_discovered": 3, 00:23:05.972 "num_base_bdevs_operational": 3, 00:23:05.972 "process": { 00:23:05.972 "type": "rebuild", 00:23:05.972 "target": "spare", 00:23:05.972 "progress": { 00:23:05.972 "blocks": 24576, 00:23:05.972 "percent": 19 00:23:05.972 } 00:23:05.972 }, 00:23:05.972 "base_bdevs_list": [ 00:23:05.972 { 00:23:05.972 "name": "spare", 00:23:05.972 "uuid": "f6e96077-46eb-5755-b85a-ca780c53c4b3", 00:23:05.972 "is_configured": true, 00:23:05.972 "data_offset": 2048, 00:23:05.972 "data_size": 63488 00:23:05.972 }, 00:23:05.972 { 00:23:05.972 "name": "BaseBdev2", 00:23:05.972 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:05.972 "is_configured": true, 00:23:05.972 "data_offset": 2048, 00:23:05.972 "data_size": 63488 00:23:05.972 }, 00:23:05.972 { 00:23:05.972 "name": "BaseBdev3", 00:23:05.972 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:05.972 "is_configured": true, 00:23:05.972 "data_offset": 2048, 00:23:05.972 "data_size": 63488 00:23:05.972 } 00:23:05.972 ] 00:23:05.972 }' 00:23:05.972 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:06.231 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:06.231 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target 
// "none"' 00:23:06.231 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:06.231 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:06.490 [2024-08-14 06:54:33.511255] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:06.490 [2024-08-14 06:54:33.553216] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:06.490 [2024-08-14 06:54:33.553296] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:06.490 [2024-08-14 06:54:33.553315] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:06.490 [2024-08-14 06:54:33.553323] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:06.490 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:06.490 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:06.490 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:06.490 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:06.490 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:06.490 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:06.490 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:06.490 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:06.490 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:06.490 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:06.490 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.490 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.749 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:06.749 "name": "raid_bdev1", 00:23:06.749 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:06.749 "strip_size_kb": 64, 00:23:06.749 "state": "online", 00:23:06.749 "raid_level": "raid5f", 00:23:06.749 "superblock": true, 00:23:06.749 "num_base_bdevs": 3, 00:23:06.749 "num_base_bdevs_discovered": 2, 00:23:06.749 "num_base_bdevs_operational": 2, 00:23:06.749 "base_bdevs_list": [ 00:23:06.749 { 00:23:06.749 "name": null, 00:23:06.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.749 "is_configured": false, 00:23:06.749 "data_offset": 2048, 00:23:06.749 "data_size": 63488 00:23:06.749 }, 00:23:06.749 { 00:23:06.749 "name": "BaseBdev2", 00:23:06.749 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:06.749 "is_configured": true, 00:23:06.749 "data_offset": 2048, 00:23:06.749 "data_size": 63488 00:23:06.749 }, 00:23:06.749 { 00:23:06.749 "name": "BaseBdev3", 00:23:06.749 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:06.749 "is_configured": true, 00:23:06.749 "data_offset": 2048, 00:23:06.749 "data_size": 63488 00:23:06.749 } 
00:23:06.749 ] 00:23:06.749 }' 00:23:06.749 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:06.749 06:54:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.318 06:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:07.318 06:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:07.318 06:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:07.318 06:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:07.318 06:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:07.318 06:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.318 06:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.578 06:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:07.578 "name": "raid_bdev1", 00:23:07.578 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:07.578 "strip_size_kb": 64, 00:23:07.578 "state": "online", 00:23:07.578 "raid_level": "raid5f", 00:23:07.578 "superblock": true, 00:23:07.578 "num_base_bdevs": 3, 00:23:07.578 "num_base_bdevs_discovered": 2, 00:23:07.578 "num_base_bdevs_operational": 2, 00:23:07.578 "base_bdevs_list": [ 00:23:07.578 { 00:23:07.578 "name": null, 00:23:07.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.578 "is_configured": false, 00:23:07.578 "data_offset": 2048, 00:23:07.578 "data_size": 63488 00:23:07.578 }, 00:23:07.578 { 00:23:07.578 "name": "BaseBdev2", 00:23:07.578 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:07.578 "is_configured": true, 00:23:07.578 "data_offset": 2048, 00:23:07.578 "data_size": 63488 00:23:07.578 }, 00:23:07.578 { 00:23:07.578 "name": "BaseBdev3", 00:23:07.578 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:07.578 "is_configured": true, 00:23:07.578 "data_offset": 2048, 00:23:07.578 "data_size": 63488 00:23:07.578 } 00:23:07.578 ] 00:23:07.578 }' 00:23:07.578 06:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:07.578 06:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:07.578 06:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:07.578 06:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:07.578 06:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:07.838 [2024-08-14 06:54:34.889007] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:07.838 [2024-08-14 06:54:34.892815] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:23:07.838 [2024-08-14 06:54:34.895191] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:07.838 06:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:23:08.777 06:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:23:08.777 06:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:08.777 06:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:08.777 06:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:08.778 06:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:08.778 06:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.778 06:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:09.038 "name": "raid_bdev1", 00:23:09.038 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:09.038 "strip_size_kb": 64, 00:23:09.038 "state": "online", 00:23:09.038 "raid_level": "raid5f", 00:23:09.038 "superblock": true, 00:23:09.038 "num_base_bdevs": 3, 00:23:09.038 "num_base_bdevs_discovered": 3, 00:23:09.038 "num_base_bdevs_operational": 3, 00:23:09.038 "process": { 00:23:09.038 "type": "rebuild", 00:23:09.038 "target": "spare", 00:23:09.038 "progress": { 00:23:09.038 "blocks": 24576, 00:23:09.038 "percent": 19 00:23:09.038 } 00:23:09.038 }, 00:23:09.038 "base_bdevs_list": [ 00:23:09.038 { 00:23:09.038 "name": "spare", 00:23:09.038 "uuid": "f6e96077-46eb-5755-b85a-ca780c53c4b3", 00:23:09.038 "is_configured": true, 00:23:09.038 "data_offset": 2048, 00:23:09.038 "data_size": 63488 00:23:09.038 }, 00:23:09.038 { 00:23:09.038 "name": "BaseBdev2", 00:23:09.038 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:09.038 "is_configured": true, 00:23:09.038 "data_offset": 2048, 00:23:09.038 "data_size": 63488 00:23:09.038 }, 00:23:09.038 { 00:23:09.038 "name": "BaseBdev3", 00:23:09.038 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:09.038 "is_configured": true, 00:23:09.038 "data_offset": 2048, 00:23:09.038 "data_size": 63488 00:23:09.038 } 00:23:09.038 ] 00:23:09.038 }' 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:23:09.038 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=3 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=1035 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:09.038 
06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.038 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.298 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:09.298 "name": "raid_bdev1", 00:23:09.298 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:09.298 "strip_size_kb": 64, 00:23:09.298 "state": "online", 00:23:09.298 "raid_level": "raid5f", 00:23:09.298 "superblock": true, 00:23:09.298 "num_base_bdevs": 3, 00:23:09.298 "num_base_bdevs_discovered": 3, 00:23:09.298 "num_base_bdevs_operational": 3, 00:23:09.298 "process": { 00:23:09.298 "type": "rebuild", 00:23:09.298 "target": "spare", 00:23:09.298 "progress": { 00:23:09.298 "blocks": 30720, 00:23:09.298 "percent": 24 00:23:09.298 } 00:23:09.298 }, 00:23:09.298 "base_bdevs_list": [ 00:23:09.298 { 00:23:09.298 "name": "spare", 00:23:09.298 "uuid": "f6e96077-46eb-5755-b85a-ca780c53c4b3", 00:23:09.298 "is_configured": true, 00:23:09.298 "data_offset": 2048, 00:23:09.298 "data_size": 63488 00:23:09.298 }, 00:23:09.298 { 00:23:09.298 "name": "BaseBdev2", 00:23:09.298 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:09.298 "is_configured": true, 00:23:09.298 "data_offset": 2048, 00:23:09.298 "data_size": 63488 00:23:09.298 }, 00:23:09.298 { 00:23:09.298 "name": "BaseBdev3", 00:23:09.298 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:09.298 "is_configured": true, 00:23:09.298 "data_offset": 2048, 00:23:09.298 "data_size": 63488 00:23:09.298 } 00:23:09.298 ] 00:23:09.298 }' 00:23:09.298 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:09.298 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:09.298 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:09.556 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:09.556 06:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:10.494 06:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:10.494 06:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:10.494 06:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:10.494 06:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:10.494 06:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:10.494 06:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:10.494 06:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:23:10.494 06:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.752 06:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:10.752 "name": "raid_bdev1", 00:23:10.752 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:10.752 "strip_size_kb": 64, 00:23:10.752 "state": "online", 00:23:10.752 "raid_level": "raid5f", 00:23:10.752 "superblock": true, 00:23:10.752 "num_base_bdevs": 3, 00:23:10.752 "num_base_bdevs_discovered": 3, 00:23:10.752 "num_base_bdevs_operational": 3, 00:23:10.752 "process": { 00:23:10.752 "type": "rebuild", 00:23:10.752 "target": "spare", 00:23:10.752 "progress": { 00:23:10.752 "blocks": 57344, 00:23:10.752 "percent": 45 00:23:10.752 } 00:23:10.752 }, 00:23:10.752 "base_bdevs_list": [ 00:23:10.752 { 00:23:10.752 "name": "spare", 00:23:10.752 "uuid": "f6e96077-46eb-5755-b85a-ca780c53c4b3", 00:23:10.752 "is_configured": true, 00:23:10.752 "data_offset": 2048, 00:23:10.752 "data_size": 63488 00:23:10.752 }, 00:23:10.752 { 00:23:10.752 "name": "BaseBdev2", 00:23:10.752 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:10.752 "is_configured": true, 00:23:10.752 "data_offset": 2048, 00:23:10.752 "data_size": 63488 00:23:10.752 }, 00:23:10.752 { 00:23:10.752 "name": "BaseBdev3", 00:23:10.752 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:10.752 "is_configured": true, 00:23:10.752 "data_offset": 2048, 00:23:10.752 "data_size": 63488 00:23:10.752 } 00:23:10.752 ] 00:23:10.752 }' 00:23:10.752 06:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:10.752 06:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:10.752 06:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:10.752 06:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:10.752 06:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:11.688 06:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:11.688 06:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:11.688 06:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:11.688 06:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:11.688 06:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:11.688 06:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:11.688 06:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.688 06:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.947 06:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:11.947 "name": "raid_bdev1", 00:23:11.947 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:11.947 "strip_size_kb": 64, 00:23:11.947 "state": "online", 00:23:11.947 "raid_level": "raid5f", 00:23:11.947 "superblock": true, 00:23:11.947 "num_base_bdevs": 3, 00:23:11.947 "num_base_bdevs_discovered": 3, 00:23:11.947 
"num_base_bdevs_operational": 3, 00:23:11.947 "process": { 00:23:11.947 "type": "rebuild", 00:23:11.947 "target": "spare", 00:23:11.947 "progress": { 00:23:11.947 "blocks": 83968, 00:23:11.947 "percent": 66 00:23:11.947 } 00:23:11.947 }, 00:23:11.947 "base_bdevs_list": [ 00:23:11.947 { 00:23:11.947 "name": "spare", 00:23:11.947 "uuid": "f6e96077-46eb-5755-b85a-ca780c53c4b3", 00:23:11.947 "is_configured": true, 00:23:11.947 "data_offset": 2048, 00:23:11.947 "data_size": 63488 00:23:11.947 }, 00:23:11.947 { 00:23:11.947 "name": "BaseBdev2", 00:23:11.947 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:11.947 "is_configured": true, 00:23:11.947 "data_offset": 2048, 00:23:11.947 "data_size": 63488 00:23:11.947 }, 00:23:11.947 { 00:23:11.947 "name": "BaseBdev3", 00:23:11.947 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:11.947 "is_configured": true, 00:23:11.947 "data_offset": 2048, 00:23:11.947 "data_size": 63488 00:23:11.947 } 00:23:11.947 ] 00:23:11.947 }' 00:23:11.947 06:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:11.947 06:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:12.206 06:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:12.206 06:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:12.206 06:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:13.143 06:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:13.143 06:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:13.143 06:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:13.143 06:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:13.143 06:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:13.143 06:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:13.143 06:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.143 06:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.402 06:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:13.402 "name": "raid_bdev1", 00:23:13.402 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:13.402 "strip_size_kb": 64, 00:23:13.402 "state": "online", 00:23:13.402 "raid_level": "raid5f", 00:23:13.402 "superblock": true, 00:23:13.402 "num_base_bdevs": 3, 00:23:13.402 "num_base_bdevs_discovered": 3, 00:23:13.402 "num_base_bdevs_operational": 3, 00:23:13.402 "process": { 00:23:13.402 "type": "rebuild", 00:23:13.402 "target": "spare", 00:23:13.402 "progress": { 00:23:13.402 "blocks": 112640, 00:23:13.402 "percent": 88 00:23:13.402 } 00:23:13.402 }, 00:23:13.402 "base_bdevs_list": [ 00:23:13.402 { 00:23:13.402 "name": "spare", 00:23:13.402 "uuid": "f6e96077-46eb-5755-b85a-ca780c53c4b3", 00:23:13.402 "is_configured": true, 00:23:13.402 "data_offset": 2048, 00:23:13.402 "data_size": 63488 00:23:13.402 }, 00:23:13.402 { 00:23:13.402 "name": "BaseBdev2", 00:23:13.402 "uuid": 
"f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:13.402 "is_configured": true, 00:23:13.402 "data_offset": 2048, 00:23:13.402 "data_size": 63488 00:23:13.402 }, 00:23:13.402 { 00:23:13.402 "name": "BaseBdev3", 00:23:13.402 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:13.402 "is_configured": true, 00:23:13.402 "data_offset": 2048, 00:23:13.402 "data_size": 63488 00:23:13.402 } 00:23:13.402 ] 00:23:13.402 }' 00:23:13.402 06:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:13.402 06:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:13.402 06:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:13.402 06:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:13.402 06:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:13.971 [2024-08-14 06:54:41.154093] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:13.971 [2024-08-14 06:54:41.154226] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:13.971 [2024-08-14 06:54:41.154389] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.537 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:14.537 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:14.537 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:14.537 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:14.537 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:14.537 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:14.537 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.537 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.795 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:14.795 "name": "raid_bdev1", 00:23:14.795 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:14.795 "strip_size_kb": 64, 00:23:14.795 "state": "online", 00:23:14.795 "raid_level": "raid5f", 00:23:14.795 "superblock": true, 00:23:14.795 "num_base_bdevs": 3, 00:23:14.795 "num_base_bdevs_discovered": 3, 00:23:14.795 "num_base_bdevs_operational": 3, 00:23:14.795 "base_bdevs_list": [ 00:23:14.795 { 00:23:14.795 "name": "spare", 00:23:14.795 "uuid": "f6e96077-46eb-5755-b85a-ca780c53c4b3", 00:23:14.795 "is_configured": true, 00:23:14.795 "data_offset": 2048, 00:23:14.795 "data_size": 63488 00:23:14.795 }, 00:23:14.795 { 00:23:14.795 "name": "BaseBdev2", 00:23:14.795 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:14.795 "is_configured": true, 00:23:14.795 "data_offset": 2048, 00:23:14.795 "data_size": 63488 00:23:14.795 }, 00:23:14.795 { 00:23:14.795 "name": "BaseBdev3", 00:23:14.795 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:14.795 "is_configured": true, 00:23:14.795 "data_offset": 2048, 00:23:14.795 "data_size": 63488 00:23:14.795 } 00:23:14.795 ] 00:23:14.795 }' 
00:23:14.795 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:14.795 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:14.795 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:14.795 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:14.795 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:23:14.795 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:14.795 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:14.795 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:14.795 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:14.795 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:14.795 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.795 06:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.053 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:15.053 "name": "raid_bdev1", 00:23:15.053 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:15.053 "strip_size_kb": 64, 00:23:15.053 "state": "online", 00:23:15.053 "raid_level": "raid5f", 00:23:15.053 "superblock": true, 00:23:15.053 "num_base_bdevs": 3, 00:23:15.053 "num_base_bdevs_discovered": 3, 00:23:15.053 "num_base_bdevs_operational": 3, 00:23:15.053 "base_bdevs_list": [ 00:23:15.053 { 00:23:15.053 "name": "spare", 00:23:15.053 "uuid": "f6e96077-46eb-5755-b85a-ca780c53c4b3", 00:23:15.053 "is_configured": true, 00:23:15.053 "data_offset": 2048, 00:23:15.053 "data_size": 63488 00:23:15.053 }, 00:23:15.053 { 00:23:15.053 "name": "BaseBdev2", 00:23:15.053 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:15.053 "is_configured": true, 00:23:15.053 "data_offset": 2048, 00:23:15.053 "data_size": 63488 00:23:15.053 }, 00:23:15.053 { 00:23:15.053 "name": "BaseBdev3", 00:23:15.053 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:15.053 "is_configured": true, 00:23:15.053 "data_offset": 2048, 00:23:15.053 "data_size": 63488 00:23:15.053 } 00:23:15.053 ] 00:23:15.053 }' 00:23:15.053 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:15.053 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:15.053 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:15.053 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:15.054 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:15.054 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:15.054 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:15.054 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid5f 00:23:15.054 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:15.054 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:15.054 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:15.054 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:15.054 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:15.054 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:15.311 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.311 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.311 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:15.311 "name": "raid_bdev1", 00:23:15.311 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:15.311 "strip_size_kb": 64, 00:23:15.311 "state": "online", 00:23:15.311 "raid_level": "raid5f", 00:23:15.311 "superblock": true, 00:23:15.311 "num_base_bdevs": 3, 00:23:15.311 "num_base_bdevs_discovered": 3, 00:23:15.311 "num_base_bdevs_operational": 3, 00:23:15.311 "base_bdevs_list": [ 00:23:15.311 { 00:23:15.311 "name": "spare", 00:23:15.311 "uuid": "f6e96077-46eb-5755-b85a-ca780c53c4b3", 00:23:15.311 "is_configured": true, 00:23:15.311 "data_offset": 2048, 00:23:15.311 "data_size": 63488 00:23:15.311 }, 00:23:15.311 { 00:23:15.311 "name": "BaseBdev2", 00:23:15.311 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:15.311 "is_configured": true, 00:23:15.311 "data_offset": 2048, 00:23:15.311 "data_size": 63488 00:23:15.311 }, 00:23:15.311 { 00:23:15.311 "name": "BaseBdev3", 00:23:15.311 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:15.311 "is_configured": true, 00:23:15.311 "data_offset": 2048, 00:23:15.311 "data_size": 63488 00:23:15.311 } 00:23:15.311 ] 00:23:15.311 }' 00:23:15.311 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:15.569 06:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.136 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:16.136 [2024-08-14 06:54:43.352540] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:16.136 [2024-08-14 06:54:43.352582] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:16.136 [2024-08-14 06:54:43.352677] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:16.136 [2024-08-14 06:54:43.352758] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:16.136 [2024-08-14 06:54:43.352771] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:23:16.136 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.136 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:23:16.394 06:54:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:23:16.394 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:23:16.394 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:23:16.394 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:16.394 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:16.394 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:16.394 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:16.394 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:16.394 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:16.394 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:16.394 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:16.394 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:16.394 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:16.654 /dev/nbd0 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:16.654 1+0 records in 00:23:16.654 1+0 records out 00:23:16.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209864 s, 19.5 MB/s 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:23:16.654 06:54:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:16.654 06:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:16.975 /dev/nbd1 00:23:16.975 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:16.975 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:16.975 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:23:16.975 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:23:16.975 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:23:16.975 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:23:16.975 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:23:16.976 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:23:16.976 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:23:16.976 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:23:16.976 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:16.976 1+0 records in 00:23:16.976 1+0 records out 00:23:16.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465902 s, 8.8 MB/s 00:23:16.976 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:16.976 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:23:16.976 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:16.976 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:23:16.976 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:23:16.976 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:16.976 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:16.976 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd0 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:17.251 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:17.511 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:17.511 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:17.511 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:17.511 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:17.511 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:17.511 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:17.511 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:17.511 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:17.511 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:23:17.511 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:17.770 06:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:18.029 [2024-08-14 06:54:45.173600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:18.029 [2024-08-14 06:54:45.173680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.029 [2024-08-14 06:54:45.173702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:18.029 [2024-08-14 06:54:45.173714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.029 [2024-08-14 06:54:45.176005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.029 [2024-08-14 06:54:45.176047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:18.029 [2024-08-14 06:54:45.176135] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:18.029 [2024-08-14 06:54:45.176199] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:18.029 [2024-08-14 06:54:45.176343] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:18.029 
[2024-08-14 06:54:45.176466] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:18.029 spare 00:23:18.029 06:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:18.029 06:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:18.029 06:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:18.029 06:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:18.029 06:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:18.029 06:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:18.029 06:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:18.029 06:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:18.029 06:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:18.029 06:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:18.029 06:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.029 06:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.029 [2024-08-14 06:54:45.276385] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:23:18.029 [2024-08-14 06:54:45.276437] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:18.029 [2024-08-14 06:54:45.276743] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043d50 00:23:18.029 [2024-08-14 06:54:45.277269] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:23:18.029 [2024-08-14 06:54:45.277313] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:23:18.029 [2024-08-14 06:54:45.277487] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:18.288 06:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:18.288 "name": "raid_bdev1", 00:23:18.288 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:18.288 "strip_size_kb": 64, 00:23:18.288 "state": "online", 00:23:18.288 "raid_level": "raid5f", 00:23:18.288 "superblock": true, 00:23:18.288 "num_base_bdevs": 3, 00:23:18.288 "num_base_bdevs_discovered": 3, 00:23:18.288 "num_base_bdevs_operational": 3, 00:23:18.288 "base_bdevs_list": [ 00:23:18.288 { 00:23:18.288 "name": "spare", 00:23:18.288 "uuid": "f6e96077-46eb-5755-b85a-ca780c53c4b3", 00:23:18.288 "is_configured": true, 00:23:18.288 "data_offset": 2048, 00:23:18.288 "data_size": 63488 00:23:18.288 }, 00:23:18.288 { 00:23:18.288 "name": "BaseBdev2", 00:23:18.288 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:18.288 "is_configured": true, 00:23:18.288 "data_offset": 2048, 00:23:18.288 "data_size": 63488 00:23:18.288 }, 00:23:18.288 { 00:23:18.288 "name": "BaseBdev3", 00:23:18.288 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:18.288 "is_configured": true, 00:23:18.288 "data_offset": 2048, 00:23:18.288 "data_size": 63488 00:23:18.288 } 00:23:18.288 ] 00:23:18.288 }' 
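verify_raid_bdev_state turns xtrace off at bdev_raid.sh@128 on the next line, so its individual assertions are not visible in this log; only the bdev_raid_get_bdevs fetch and the locals (expected_state, raid_level, strip_size, num_base_bdevs_operational) appear. A hedged sketch of the kind of checks the fetched JSON supports follows; the field names are taken from the raid_bdev_info dumps above, while the exact assertions inside the helper are an assumption:

# assumed shape of the hidden state checks, using the locals shown at @116-@124
[[ $(jq -r '.state' <<< "$raid_bdev_info") == "$expected_state" ]]
[[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == "$raid_level" ]]
[[ $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") == "$strip_size" ]]
[[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") == "$num_base_bdevs_operational" ]]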
00:23:18.288 06:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:18.288 06:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.856 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:18.856 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:18.856 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:18.856 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:18.856 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:18.856 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.856 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.115 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:19.115 "name": "raid_bdev1", 00:23:19.115 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:19.115 "strip_size_kb": 64, 00:23:19.115 "state": "online", 00:23:19.115 "raid_level": "raid5f", 00:23:19.115 "superblock": true, 00:23:19.115 "num_base_bdevs": 3, 00:23:19.115 "num_base_bdevs_discovered": 3, 00:23:19.115 "num_base_bdevs_operational": 3, 00:23:19.115 "base_bdevs_list": [ 00:23:19.115 { 00:23:19.115 "name": "spare", 00:23:19.115 "uuid": "f6e96077-46eb-5755-b85a-ca780c53c4b3", 00:23:19.115 "is_configured": true, 00:23:19.115 "data_offset": 2048, 00:23:19.115 "data_size": 63488 00:23:19.115 }, 00:23:19.115 { 00:23:19.115 "name": "BaseBdev2", 00:23:19.115 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:19.115 "is_configured": true, 00:23:19.115 "data_offset": 2048, 00:23:19.115 "data_size": 63488 00:23:19.115 }, 00:23:19.115 { 00:23:19.115 "name": "BaseBdev3", 00:23:19.115 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:19.115 "is_configured": true, 00:23:19.115 "data_offset": 2048, 00:23:19.115 "data_size": 63488 00:23:19.115 } 00:23:19.115 ] 00:23:19.115 }' 00:23:19.115 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:19.115 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:19.115 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:19.374 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:19.374 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.374 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:19.374 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:23:19.374 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:19.634 [2024-08-14 06:54:46.850941] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:19.634 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 
online raid5f 64 2 00:23:19.634 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:19.634 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:19.634 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:19.634 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:19.634 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:19.634 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:19.634 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:19.634 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:19.634 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:19.634 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.634 06:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.894 06:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:19.894 "name": "raid_bdev1", 00:23:19.894 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:19.894 "strip_size_kb": 64, 00:23:19.894 "state": "online", 00:23:19.894 "raid_level": "raid5f", 00:23:19.894 "superblock": true, 00:23:19.894 "num_base_bdevs": 3, 00:23:19.894 "num_base_bdevs_discovered": 2, 00:23:19.894 "num_base_bdevs_operational": 2, 00:23:19.894 "base_bdevs_list": [ 00:23:19.894 { 00:23:19.894 "name": null, 00:23:19.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.894 "is_configured": false, 00:23:19.894 "data_offset": 2048, 00:23:19.894 "data_size": 63488 00:23:19.894 }, 00:23:19.894 { 00:23:19.894 "name": "BaseBdev2", 00:23:19.894 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:19.894 "is_configured": true, 00:23:19.894 "data_offset": 2048, 00:23:19.894 "data_size": 63488 00:23:19.894 }, 00:23:19.894 { 00:23:19.894 "name": "BaseBdev3", 00:23:19.894 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:19.894 "is_configured": true, 00:23:19.894 "data_offset": 2048, 00:23:19.894 "data_size": 63488 00:23:19.894 } 00:23:19.894 ] 00:23:19.894 }' 00:23:19.894 06:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:19.894 06:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.831 06:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:20.831 [2024-08-14 06:54:47.993297] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:20.831 [2024-08-14 06:54:47.993522] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:20.831 [2024-08-14 06:54:47.993553] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
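The superblock sequence-number comparison above (spare at 4, raid_bdev1 at 5) is what marks the returning device as stale and triggers a rebuild onto it rather than a plain re-activation. The re-add itself is the single RPC traced at bdev_raid.sh@770; as an illustrative follow-up (the RPC name and socket path are as traced, the jq filter is an assumption), the resulting rebuild can be observed with:

# re-add the previously removed base bdev, as at bdev_raid.sh@770
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_add_base_bdev raid_bdev1 spare
# the rebuild process onto "spare" then shows up in the raid bdev info
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .process'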
00:23:20.831 [2024-08-14 06:54:47.993605] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:20.831 [2024-08-14 06:54:47.997566] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043e20 00:23:20.831 [2024-08-14 06:54:48.000115] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:20.831 06:54:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:23:21.768 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:21.768 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:21.768 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:21.768 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:21.768 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:22.027 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.027 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.286 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:22.286 "name": "raid_bdev1", 00:23:22.286 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:22.286 "strip_size_kb": 64, 00:23:22.286 "state": "online", 00:23:22.286 "raid_level": "raid5f", 00:23:22.286 "superblock": true, 00:23:22.286 "num_base_bdevs": 3, 00:23:22.286 "num_base_bdevs_discovered": 3, 00:23:22.286 "num_base_bdevs_operational": 3, 00:23:22.286 "process": { 00:23:22.286 "type": "rebuild", 00:23:22.286 "target": "spare", 00:23:22.286 "progress": { 00:23:22.286 "blocks": 24576, 00:23:22.286 "percent": 19 00:23:22.286 } 00:23:22.286 }, 00:23:22.286 "base_bdevs_list": [ 00:23:22.286 { 00:23:22.286 "name": "spare", 00:23:22.286 "uuid": "f6e96077-46eb-5755-b85a-ca780c53c4b3", 00:23:22.286 "is_configured": true, 00:23:22.286 "data_offset": 2048, 00:23:22.286 "data_size": 63488 00:23:22.286 }, 00:23:22.286 { 00:23:22.286 "name": "BaseBdev2", 00:23:22.286 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:22.286 "is_configured": true, 00:23:22.286 "data_offset": 2048, 00:23:22.286 "data_size": 63488 00:23:22.286 }, 00:23:22.286 { 00:23:22.286 "name": "BaseBdev3", 00:23:22.286 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:22.286 "is_configured": true, 00:23:22.286 "data_offset": 2048, 00:23:22.286 "data_size": 63488 00:23:22.286 } 00:23:22.286 ] 00:23:22.286 }' 00:23:22.286 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:22.286 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:22.286 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:22.286 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:22.286 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:22.545 [2024-08-14 06:54:49.758257] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:22.805 [2024-08-14 
06:54:49.818358] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:22.805 [2024-08-14 06:54:49.818463] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.805 [2024-08-14 06:54:49.818487] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:22.805 [2024-08-14 06:54:49.818499] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:22.805 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:22.805 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:22.805 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:22.805 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:22.805 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:22.805 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:22.805 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.805 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.805 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.805 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.805 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.805 06:54:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.065 06:54:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:23.065 "name": "raid_bdev1", 00:23:23.065 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:23.065 "strip_size_kb": 64, 00:23:23.065 "state": "online", 00:23:23.065 "raid_level": "raid5f", 00:23:23.065 "superblock": true, 00:23:23.065 "num_base_bdevs": 3, 00:23:23.065 "num_base_bdevs_discovered": 2, 00:23:23.065 "num_base_bdevs_operational": 2, 00:23:23.065 "base_bdevs_list": [ 00:23:23.065 { 00:23:23.065 "name": null, 00:23:23.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.065 "is_configured": false, 00:23:23.065 "data_offset": 2048, 00:23:23.065 "data_size": 63488 00:23:23.065 }, 00:23:23.065 { 00:23:23.065 "name": "BaseBdev2", 00:23:23.065 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:23.065 "is_configured": true, 00:23:23.065 "data_offset": 2048, 00:23:23.065 "data_size": 63488 00:23:23.065 }, 00:23:23.065 { 00:23:23.065 "name": "BaseBdev3", 00:23:23.065 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:23.065 "is_configured": true, 00:23:23.065 "data_offset": 2048, 00:23:23.065 "data_size": 63488 00:23:23.065 } 00:23:23.065 ] 00:23:23.065 }' 00:23:23.065 06:54:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:23.065 06:54:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.633 06:54:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:23.894 
[2024-08-14 06:54:51.050453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:23.894 [2024-08-14 06:54:51.050557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.894 [2024-08-14 06:54:51.050582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:23.894 [2024-08-14 06:54:51.050595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.894 [2024-08-14 06:54:51.051081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.894 [2024-08-14 06:54:51.051114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:23.894 [2024-08-14 06:54:51.051232] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:23.894 [2024-08-14 06:54:51.051257] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:23.894 [2024-08-14 06:54:51.051275] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:23.894 [2024-08-14 06:54:51.051313] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:23.894 [2024-08-14 06:54:51.055206] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043ef0 00:23:23.894 spare 00:23:23.894 [2024-08-14 06:54:51.057762] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:23.894 06:54:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:23:25.272 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:25.272 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:25.272 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:25.272 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:25.272 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:25.272 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.272 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.272 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:25.272 "name": "raid_bdev1", 00:23:25.272 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:25.272 "strip_size_kb": 64, 00:23:25.272 "state": "online", 00:23:25.272 "raid_level": "raid5f", 00:23:25.272 "superblock": true, 00:23:25.272 "num_base_bdevs": 3, 00:23:25.272 "num_base_bdevs_discovered": 3, 00:23:25.272 "num_base_bdevs_operational": 3, 00:23:25.272 "process": { 00:23:25.272 "type": "rebuild", 00:23:25.272 "target": "spare", 00:23:25.272 "progress": { 00:23:25.272 "blocks": 26624, 00:23:25.272 "percent": 20 00:23:25.272 } 00:23:25.272 }, 00:23:25.272 "base_bdevs_list": [ 00:23:25.272 { 00:23:25.272 "name": "spare", 00:23:25.272 "uuid": "f6e96077-46eb-5755-b85a-ca780c53c4b3", 00:23:25.272 "is_configured": true, 00:23:25.272 "data_offset": 2048, 00:23:25.272 "data_size": 63488 00:23:25.272 }, 00:23:25.272 { 00:23:25.272 "name": "BaseBdev2", 00:23:25.272 "uuid": 
"f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:25.272 "is_configured": true, 00:23:25.272 "data_offset": 2048, 00:23:25.272 "data_size": 63488 00:23:25.272 }, 00:23:25.272 { 00:23:25.272 "name": "BaseBdev3", 00:23:25.272 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:25.272 "is_configured": true, 00:23:25.272 "data_offset": 2048, 00:23:25.272 "data_size": 63488 00:23:25.272 } 00:23:25.272 ] 00:23:25.272 }' 00:23:25.272 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:25.272 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:25.273 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:25.273 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:25.273 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:25.840 [2024-08-14 06:54:52.798908] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:25.840 [2024-08-14 06:54:52.876117] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:25.840 [2024-08-14 06:54:52.876238] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:25.840 [2024-08-14 06:54:52.876267] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:25.840 [2024-08-14 06:54:52.876277] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:25.840 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:25.840 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:25.840 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:25.840 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:25.840 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:25.840 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:25.840 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:25.840 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:25.840 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:25.840 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:25.840 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.840 06:54:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.099 06:54:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:26.099 "name": "raid_bdev1", 00:23:26.099 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:26.099 "strip_size_kb": 64, 00:23:26.099 "state": "online", 00:23:26.099 "raid_level": "raid5f", 00:23:26.099 "superblock": true, 00:23:26.099 "num_base_bdevs": 3, 00:23:26.099 "num_base_bdevs_discovered": 2, 00:23:26.099 
"num_base_bdevs_operational": 2, 00:23:26.099 "base_bdevs_list": [ 00:23:26.099 { 00:23:26.099 "name": null, 00:23:26.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.099 "is_configured": false, 00:23:26.099 "data_offset": 2048, 00:23:26.099 "data_size": 63488 00:23:26.099 }, 00:23:26.099 { 00:23:26.099 "name": "BaseBdev2", 00:23:26.099 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:26.099 "is_configured": true, 00:23:26.099 "data_offset": 2048, 00:23:26.099 "data_size": 63488 00:23:26.099 }, 00:23:26.099 { 00:23:26.099 "name": "BaseBdev3", 00:23:26.099 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:26.099 "is_configured": true, 00:23:26.099 "data_offset": 2048, 00:23:26.099 "data_size": 63488 00:23:26.099 } 00:23:26.099 ] 00:23:26.099 }' 00:23:26.099 06:54:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:26.099 06:54:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.667 06:54:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:26.667 06:54:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:26.667 06:54:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:26.667 06:54:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:26.667 06:54:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:26.667 06:54:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.667 06:54:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.237 06:54:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:27.237 "name": "raid_bdev1", 00:23:27.237 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:27.237 "strip_size_kb": 64, 00:23:27.237 "state": "online", 00:23:27.237 "raid_level": "raid5f", 00:23:27.237 "superblock": true, 00:23:27.237 "num_base_bdevs": 3, 00:23:27.237 "num_base_bdevs_discovered": 2, 00:23:27.237 "num_base_bdevs_operational": 2, 00:23:27.237 "base_bdevs_list": [ 00:23:27.237 { 00:23:27.237 "name": null, 00:23:27.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.237 "is_configured": false, 00:23:27.237 "data_offset": 2048, 00:23:27.237 "data_size": 63488 00:23:27.237 }, 00:23:27.237 { 00:23:27.237 "name": "BaseBdev2", 00:23:27.237 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:27.237 "is_configured": true, 00:23:27.237 "data_offset": 2048, 00:23:27.237 "data_size": 63488 00:23:27.237 }, 00:23:27.237 { 00:23:27.237 "name": "BaseBdev3", 00:23:27.237 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:27.237 "is_configured": true, 00:23:27.237 "data_offset": 2048, 00:23:27.237 "data_size": 63488 00:23:27.237 } 00:23:27.237 ] 00:23:27.237 }' 00:23:27.237 06:54:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:27.237 06:54:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:27.237 06:54:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:27.237 06:54:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:27.237 06:54:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:27.509 06:54:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:27.802 [2024-08-14 06:54:54.823448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:27.802 [2024-08-14 06:54:54.823545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.802 [2024-08-14 06:54:54.823579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:27.802 [2024-08-14 06:54:54.823593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.802 [2024-08-14 06:54:54.824073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.802 [2024-08-14 06:54:54.824110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:27.802 [2024-08-14 06:54:54.824230] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:27.802 [2024-08-14 06:54:54.824253] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:27.802 [2024-08-14 06:54:54.824267] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:27.802 BaseBdev1 00:23:27.802 06:54:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:23:28.740 06:54:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:28.740 06:54:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:28.740 06:54:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:28.740 06:54:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:28.740 06:54:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:28.740 06:54:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:28.740 06:54:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:28.740 06:54:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:28.740 06:54:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:28.740 06:54:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:28.740 06:54:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.740 06:54:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.307 06:54:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:29.307 "name": "raid_bdev1", 00:23:29.307 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:29.307 "strip_size_kb": 64, 00:23:29.307 "state": "online", 00:23:29.307 "raid_level": "raid5f", 00:23:29.307 "superblock": true, 00:23:29.307 "num_base_bdevs": 3, 00:23:29.307 "num_base_bdevs_discovered": 2, 00:23:29.307 
"num_base_bdevs_operational": 2, 00:23:29.307 "base_bdevs_list": [ 00:23:29.307 { 00:23:29.307 "name": null, 00:23:29.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.307 "is_configured": false, 00:23:29.307 "data_offset": 2048, 00:23:29.307 "data_size": 63488 00:23:29.307 }, 00:23:29.307 { 00:23:29.307 "name": "BaseBdev2", 00:23:29.307 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:29.307 "is_configured": true, 00:23:29.307 "data_offset": 2048, 00:23:29.307 "data_size": 63488 00:23:29.307 }, 00:23:29.307 { 00:23:29.307 "name": "BaseBdev3", 00:23:29.307 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:29.307 "is_configured": true, 00:23:29.307 "data_offset": 2048, 00:23:29.307 "data_size": 63488 00:23:29.307 } 00:23:29.307 ] 00:23:29.307 }' 00:23:29.307 06:54:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:29.307 06:54:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.873 06:54:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:29.873 06:54:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:29.873 06:54:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:29.873 06:54:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:29.873 06:54:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:29.873 06:54:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.873 06:54:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.131 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:30.131 "name": "raid_bdev1", 00:23:30.131 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:30.131 "strip_size_kb": 64, 00:23:30.131 "state": "online", 00:23:30.131 "raid_level": "raid5f", 00:23:30.131 "superblock": true, 00:23:30.131 "num_base_bdevs": 3, 00:23:30.131 "num_base_bdevs_discovered": 2, 00:23:30.131 "num_base_bdevs_operational": 2, 00:23:30.131 "base_bdevs_list": [ 00:23:30.131 { 00:23:30.131 "name": null, 00:23:30.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.131 "is_configured": false, 00:23:30.131 "data_offset": 2048, 00:23:30.131 "data_size": 63488 00:23:30.131 }, 00:23:30.131 { 00:23:30.131 "name": "BaseBdev2", 00:23:30.131 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:30.131 "is_configured": true, 00:23:30.131 "data_offset": 2048, 00:23:30.131 "data_size": 63488 00:23:30.131 }, 00:23:30.131 { 00:23:30.131 "name": "BaseBdev3", 00:23:30.131 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:30.131 "is_configured": true, 00:23:30.131 "data_offset": 2048, 00:23:30.131 "data_size": 63488 00:23:30.131 } 00:23:30.131 ] 00:23:30.131 }' 00:23:30.131 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:30.131 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:30.131 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:30.131 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:30.131 06:54:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:30.131 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@646 -- # local es=0 00:23:30.131 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:30.131 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:30.131 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:23:30.131 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:30.131 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:23:30.131 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:30.131 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:23:30.131 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:30.131 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:30.131 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:30.389 [2024-08-14 06:54:57.611618] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:30.389 [2024-08-14 06:54:57.611831] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:30.389 [2024-08-14 06:54:57.611847] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:30.389 request: 00:23:30.389 { 00:23:30.389 "base_bdev": "BaseBdev1", 00:23:30.389 "raid_bdev": "raid_bdev1", 00:23:30.389 "method": "bdev_raid_add_base_bdev", 00:23:30.389 "req_id": 1 00:23:30.389 } 00:23:30.389 Got JSON-RPC error response 00:23:30.389 response: 00:23:30.389 { 00:23:30.389 "code": -22, 00:23:30.389 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:30.389 } 00:23:30.647 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@649 -- # es=1 00:23:30.647 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:23:30.647 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:23:30.647 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:23:30.647 06:54:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:23:31.584 06:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:31.584 06:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:31.584 06:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:23:31.584 06:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:31.584 06:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:31.584 06:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:31.584 06:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:31.584 06:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:31.584 06:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:31.584 06:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:31.584 06:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.584 06:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.843 06:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.843 "name": "raid_bdev1", 00:23:31.843 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:31.843 "strip_size_kb": 64, 00:23:31.843 "state": "online", 00:23:31.843 "raid_level": "raid5f", 00:23:31.843 "superblock": true, 00:23:31.843 "num_base_bdevs": 3, 00:23:31.843 "num_base_bdevs_discovered": 2, 00:23:31.843 "num_base_bdevs_operational": 2, 00:23:31.843 "base_bdevs_list": [ 00:23:31.843 { 00:23:31.843 "name": null, 00:23:31.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.843 "is_configured": false, 00:23:31.843 "data_offset": 2048, 00:23:31.843 "data_size": 63488 00:23:31.843 }, 00:23:31.843 { 00:23:31.843 "name": "BaseBdev2", 00:23:31.843 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:31.843 "is_configured": true, 00:23:31.843 "data_offset": 2048, 00:23:31.843 "data_size": 63488 00:23:31.843 }, 00:23:31.843 { 00:23:31.843 "name": "BaseBdev3", 00:23:31.843 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:31.843 "is_configured": true, 00:23:31.843 "data_offset": 2048, 00:23:31.843 "data_size": 63488 00:23:31.843 } 00:23:31.843 ] 00:23:31.843 }' 00:23:31.843 06:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.843 06:54:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.452 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:32.452 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:32.452 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:32.452 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:32.452 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:32.452 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.452 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.732 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:32.732 "name": "raid_bdev1", 00:23:32.732 "uuid": "d95db5ea-59c4-4200-8f46-b33ea56c0c80", 00:23:32.732 
"strip_size_kb": 64, 00:23:32.732 "state": "online", 00:23:32.732 "raid_level": "raid5f", 00:23:32.732 "superblock": true, 00:23:32.732 "num_base_bdevs": 3, 00:23:32.732 "num_base_bdevs_discovered": 2, 00:23:32.732 "num_base_bdevs_operational": 2, 00:23:32.732 "base_bdevs_list": [ 00:23:32.732 { 00:23:32.732 "name": null, 00:23:32.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.732 "is_configured": false, 00:23:32.732 "data_offset": 2048, 00:23:32.732 "data_size": 63488 00:23:32.732 }, 00:23:32.732 { 00:23:32.732 "name": "BaseBdev2", 00:23:32.732 "uuid": "f19d61d2-cdbe-55e5-a966-c74b8a68ea71", 00:23:32.732 "is_configured": true, 00:23:32.732 "data_offset": 2048, 00:23:32.732 "data_size": 63488 00:23:32.732 }, 00:23:32.732 { 00:23:32.732 "name": "BaseBdev3", 00:23:32.732 "uuid": "3eff916e-2d2a-5286-8c28-ae82121f988c", 00:23:32.732 "is_configured": true, 00:23:32.732 "data_offset": 2048, 00:23:32.732 "data_size": 63488 00:23:32.732 } 00:23:32.732 ] 00:23:32.732 }' 00:23:32.732 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:32.732 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:32.732 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:32.732 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:32.733 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 100939 00:23:32.733 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 100939 ']' 00:23:32.733 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 100939 00:23:32.733 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:23:32.733 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:32.733 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100939 00:23:32.733 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:32.733 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:32.733 killing process with pid 100939 00:23:32.733 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100939' 00:23:32.733 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 100939 00:23:32.733 Received shutdown signal, test time was about 60.000000 seconds 00:23:32.733 00:23:32.733 Latency(us) 00:23:32.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.733 =================================================================================================================== 00:23:32.733 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:32.733 [2024-08-14 06:54:59.888649] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:32.733 06:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 100939 00:23:32.733 [2024-08-14 06:54:59.888789] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:32.733 [2024-08-14 06:54:59.888874] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:32.733 [2024-08-14 06:54:59.888888] 
bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:23:32.733 [2024-08-14 06:54:59.933104] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:32.993 06:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:23:32.993 00:23:32.993 real 0m35.751s 00:23:32.993 user 0m56.773s 00:23:32.993 sys 0m4.331s 00:23:32.993 06:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:32.993 06:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.993 ************************************ 00:23:32.993 END TEST raid5f_rebuild_test_sb 00:23:32.993 ************************************ 00:23:32.993 06:55:00 bdev_raid -- bdev/bdev_raid.sh@964 -- # for n in {3..4} 00:23:32.993 06:55:00 bdev_raid -- bdev/bdev_raid.sh@965 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:23:32.993 06:55:00 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:23:32.993 06:55:00 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:32.993 06:55:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:32.993 ************************************ 00:23:32.993 START TEST raid5f_state_function_test 00:23:32.993 ************************************ 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid5f 4 false 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 
'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=101823 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 101823' 00:23:32.993 Process raid pid: 101823 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 101823 /var/tmp/spdk-raid.sock 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 101823 ']' 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:32.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:32.993 06:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.251 [2024-08-14 06:55:00.296197] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
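(Aside, not part of the captured output: an illustrative sketch of how this state-function scenario could be driven by hand, using only the daemon path, socket, and RPC parameters shown in the surrounding log; it is not the test script itself, and the test additionally waits for the RPC socket via waitforlisten before issuing RPCs.)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Start the bdev service with raid debug logging on the dedicated socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
    # Creating the raid5f volume before any base bdev exists leaves it in the
    # "configuring" state, as the debug output above reports.
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # Base bdevs are then added as 32 MiB malloc disks with 512-byte blocks
    # (65536 blocks each); the volume only goes "online" once all four are claimed.
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
    "$rpc" -s "$sock" bdev_wait_for_examine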
00:23:33.251 [2024-08-14 06:55:00.296346] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.251 [2024-08-14 06:55:00.429009] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.251 [2024-08-14 06:55:00.483902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.509 [2024-08-14 06:55:00.529467] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:33.509 [2024-08-14 06:55:00.529517] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:34.077 06:55:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:34.077 06:55:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:23:34.077 06:55:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:34.335 [2024-08-14 06:55:01.470607] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:34.335 [2024-08-14 06:55:01.470681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:34.335 [2024-08-14 06:55:01.470697] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:34.335 [2024-08-14 06:55:01.470706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:34.335 [2024-08-14 06:55:01.470717] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:34.335 [2024-08-14 06:55:01.470725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:34.335 [2024-08-14 06:55:01.470736] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:34.335 [2024-08-14 06:55:01.470743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:34.335 06:55:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:34.335 06:55:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:34.335 06:55:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:34.335 06:55:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:34.335 06:55:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:34.335 06:55:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:34.335 06:55:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:34.335 06:55:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:34.335 06:55:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:34.335 06:55:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:34.335 06:55:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.335 06:55:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:34.593 06:55:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:34.593 "name": "Existed_Raid", 00:23:34.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.593 "strip_size_kb": 64, 00:23:34.593 "state": "configuring", 00:23:34.593 "raid_level": "raid5f", 00:23:34.593 "superblock": false, 00:23:34.593 "num_base_bdevs": 4, 00:23:34.593 "num_base_bdevs_discovered": 0, 00:23:34.593 "num_base_bdevs_operational": 4, 00:23:34.593 "base_bdevs_list": [ 00:23:34.593 { 00:23:34.593 "name": "BaseBdev1", 00:23:34.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.593 "is_configured": false, 00:23:34.593 "data_offset": 0, 00:23:34.593 "data_size": 0 00:23:34.593 }, 00:23:34.593 { 00:23:34.593 "name": "BaseBdev2", 00:23:34.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.593 "is_configured": false, 00:23:34.593 "data_offset": 0, 00:23:34.593 "data_size": 0 00:23:34.593 }, 00:23:34.593 { 00:23:34.593 "name": "BaseBdev3", 00:23:34.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.593 "is_configured": false, 00:23:34.593 "data_offset": 0, 00:23:34.593 "data_size": 0 00:23:34.593 }, 00:23:34.593 { 00:23:34.593 "name": "BaseBdev4", 00:23:34.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.593 "is_configured": false, 00:23:34.593 "data_offset": 0, 00:23:34.593 "data_size": 0 00:23:34.593 } 00:23:34.593 ] 00:23:34.593 }' 00:23:34.593 06:55:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:34.593 06:55:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.160 06:55:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:35.419 [2024-08-14 06:55:02.517026] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:35.419 [2024-08-14 06:55:02.517080] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:23:35.419 06:55:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:35.678 [2024-08-14 06:55:02.728698] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:35.678 [2024-08-14 06:55:02.728765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:35.678 [2024-08-14 06:55:02.728778] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:35.678 [2024-08-14 06:55:02.728786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:35.678 [2024-08-14 06:55:02.728795] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:35.678 [2024-08-14 06:55:02.728803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:35.678 [2024-08-14 06:55:02.728813] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:35.678 [2024-08-14 06:55:02.728821] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:35.678 06:55:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:35.935 [2024-08-14 06:55:02.977568] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:35.935 BaseBdev1 00:23:35.935 06:55:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:35.935 06:55:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:23:35.935 06:55:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:35.935 06:55:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:35.935 06:55:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:35.935 06:55:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:35.935 06:55:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:36.193 06:55:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:36.452 [ 00:23:36.452 { 00:23:36.452 "name": "BaseBdev1", 00:23:36.452 "aliases": [ 00:23:36.452 "9623fa48-6db4-4336-982d-de172ef1be47" 00:23:36.452 ], 00:23:36.452 "product_name": "Malloc disk", 00:23:36.452 "block_size": 512, 00:23:36.452 "num_blocks": 65536, 00:23:36.452 "uuid": "9623fa48-6db4-4336-982d-de172ef1be47", 00:23:36.452 "assigned_rate_limits": { 00:23:36.452 "rw_ios_per_sec": 0, 00:23:36.452 "rw_mbytes_per_sec": 0, 00:23:36.452 "r_mbytes_per_sec": 0, 00:23:36.452 "w_mbytes_per_sec": 0 00:23:36.452 }, 00:23:36.452 "claimed": true, 00:23:36.452 "claim_type": "exclusive_write", 00:23:36.452 "zoned": false, 00:23:36.452 "supported_io_types": { 00:23:36.452 "read": true, 00:23:36.452 "write": true, 00:23:36.452 "unmap": true, 00:23:36.452 "flush": true, 00:23:36.452 "reset": true, 00:23:36.452 "nvme_admin": false, 00:23:36.452 "nvme_io": false, 00:23:36.452 "nvme_io_md": false, 00:23:36.452 "write_zeroes": true, 00:23:36.452 "zcopy": true, 00:23:36.452 "get_zone_info": false, 00:23:36.452 "zone_management": false, 00:23:36.452 "zone_append": false, 00:23:36.452 "compare": false, 00:23:36.452 "compare_and_write": false, 00:23:36.452 "abort": true, 00:23:36.452 "seek_hole": false, 00:23:36.452 "seek_data": false, 00:23:36.452 "copy": true, 00:23:36.452 "nvme_iov_md": false 00:23:36.452 }, 00:23:36.452 "memory_domains": [ 00:23:36.453 { 00:23:36.453 "dma_device_id": "system", 00:23:36.453 "dma_device_type": 1 00:23:36.453 }, 00:23:36.453 { 00:23:36.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:36.453 "dma_device_type": 2 00:23:36.453 } 00:23:36.453 ], 00:23:36.453 "driver_specific": {} 00:23:36.453 } 00:23:36.453 ] 00:23:36.453 06:55:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:36.453 06:55:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:36.453 06:55:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:36.453 06:55:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:36.453 06:55:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:36.453 06:55:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:36.453 06:55:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:36.453 06:55:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.453 06:55:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.453 06:55:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.453 06:55:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.453 06:55:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:36.453 06:55:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.712 06:55:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:36.712 "name": "Existed_Raid", 00:23:36.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.712 "strip_size_kb": 64, 00:23:36.712 "state": "configuring", 00:23:36.712 "raid_level": "raid5f", 00:23:36.712 "superblock": false, 00:23:36.712 "num_base_bdevs": 4, 00:23:36.712 "num_base_bdevs_discovered": 1, 00:23:36.712 "num_base_bdevs_operational": 4, 00:23:36.712 "base_bdevs_list": [ 00:23:36.712 { 00:23:36.712 "name": "BaseBdev1", 00:23:36.712 "uuid": "9623fa48-6db4-4336-982d-de172ef1be47", 00:23:36.712 "is_configured": true, 00:23:36.712 "data_offset": 0, 00:23:36.712 "data_size": 65536 00:23:36.712 }, 00:23:36.712 { 00:23:36.712 "name": "BaseBdev2", 00:23:36.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.712 "is_configured": false, 00:23:36.712 "data_offset": 0, 00:23:36.712 "data_size": 0 00:23:36.712 }, 00:23:36.712 { 00:23:36.712 "name": "BaseBdev3", 00:23:36.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.712 "is_configured": false, 00:23:36.712 "data_offset": 0, 00:23:36.712 "data_size": 0 00:23:36.712 }, 00:23:36.712 { 00:23:36.712 "name": "BaseBdev4", 00:23:36.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.712 "is_configured": false, 00:23:36.712 "data_offset": 0, 00:23:36.712 "data_size": 0 00:23:36.712 } 00:23:36.712 ] 00:23:36.712 }' 00:23:36.712 06:55:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:36.712 06:55:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.306 06:55:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:37.306 [2024-08-14 06:55:04.511054] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:37.306 [2024-08-14 06:55:04.511135] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:23:37.306 06:55:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n 
Existed_Raid 00:23:37.565 [2024-08-14 06:55:04.758719] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:37.565 [2024-08-14 06:55:04.760732] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:37.565 [2024-08-14 06:55:04.760778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:37.565 [2024-08-14 06:55:04.760794] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:37.565 [2024-08-14 06:55:04.760804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:37.565 [2024-08-14 06:55:04.760813] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:37.565 [2024-08-14 06:55:04.760821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:37.565 06:55:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:37.565 06:55:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:37.565 06:55:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:37.565 06:55:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:37.566 06:55:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:37.566 06:55:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:37.566 06:55:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:37.566 06:55:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:37.566 06:55:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:37.566 06:55:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:37.566 06:55:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:37.566 06:55:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:37.566 06:55:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.566 06:55:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:37.824 06:55:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:37.824 "name": "Existed_Raid", 00:23:37.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.824 "strip_size_kb": 64, 00:23:37.824 "state": "configuring", 00:23:37.824 "raid_level": "raid5f", 00:23:37.824 "superblock": false, 00:23:37.824 "num_base_bdevs": 4, 00:23:37.824 "num_base_bdevs_discovered": 1, 00:23:37.824 "num_base_bdevs_operational": 4, 00:23:37.824 "base_bdevs_list": [ 00:23:37.824 { 00:23:37.824 "name": "BaseBdev1", 00:23:37.824 "uuid": "9623fa48-6db4-4336-982d-de172ef1be47", 00:23:37.824 "is_configured": true, 00:23:37.824 "data_offset": 0, 00:23:37.824 "data_size": 65536 00:23:37.824 }, 00:23:37.824 { 00:23:37.824 "name": "BaseBdev2", 00:23:37.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.824 "is_configured": false, 00:23:37.824 "data_offset": 0, 
00:23:37.824 "data_size": 0 00:23:37.824 }, 00:23:37.824 { 00:23:37.824 "name": "BaseBdev3", 00:23:37.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.824 "is_configured": false, 00:23:37.824 "data_offset": 0, 00:23:37.824 "data_size": 0 00:23:37.824 }, 00:23:37.824 { 00:23:37.824 "name": "BaseBdev4", 00:23:37.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.824 "is_configured": false, 00:23:37.824 "data_offset": 0, 00:23:37.824 "data_size": 0 00:23:37.824 } 00:23:37.824 ] 00:23:37.824 }' 00:23:37.824 06:55:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:37.824 06:55:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.390 06:55:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:38.649 [2024-08-14 06:55:05.847776] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:38.649 BaseBdev2 00:23:38.649 06:55:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:38.649 06:55:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:23:38.649 06:55:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:38.649 06:55:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:38.649 06:55:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:38.649 06:55:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:38.649 06:55:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:38.907 06:55:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:39.166 [ 00:23:39.166 { 00:23:39.166 "name": "BaseBdev2", 00:23:39.166 "aliases": [ 00:23:39.166 "f9680a1e-2d05-404a-a775-427999754b78" 00:23:39.166 ], 00:23:39.166 "product_name": "Malloc disk", 00:23:39.166 "block_size": 512, 00:23:39.166 "num_blocks": 65536, 00:23:39.166 "uuid": "f9680a1e-2d05-404a-a775-427999754b78", 00:23:39.166 "assigned_rate_limits": { 00:23:39.166 "rw_ios_per_sec": 0, 00:23:39.166 "rw_mbytes_per_sec": 0, 00:23:39.166 "r_mbytes_per_sec": 0, 00:23:39.166 "w_mbytes_per_sec": 0 00:23:39.166 }, 00:23:39.166 "claimed": true, 00:23:39.166 "claim_type": "exclusive_write", 00:23:39.166 "zoned": false, 00:23:39.166 "supported_io_types": { 00:23:39.166 "read": true, 00:23:39.166 "write": true, 00:23:39.166 "unmap": true, 00:23:39.166 "flush": true, 00:23:39.166 "reset": true, 00:23:39.166 "nvme_admin": false, 00:23:39.166 "nvme_io": false, 00:23:39.166 "nvme_io_md": false, 00:23:39.166 "write_zeroes": true, 00:23:39.166 "zcopy": true, 00:23:39.166 "get_zone_info": false, 00:23:39.166 "zone_management": false, 00:23:39.166 "zone_append": false, 00:23:39.166 "compare": false, 00:23:39.166 "compare_and_write": false, 00:23:39.166 "abort": true, 00:23:39.166 "seek_hole": false, 00:23:39.166 "seek_data": false, 00:23:39.166 "copy": true, 00:23:39.166 "nvme_iov_md": false 00:23:39.166 }, 00:23:39.166 "memory_domains": [ 00:23:39.166 { 00:23:39.166 "dma_device_id": 
"system", 00:23:39.166 "dma_device_type": 1 00:23:39.166 }, 00:23:39.166 { 00:23:39.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.166 "dma_device_type": 2 00:23:39.166 } 00:23:39.166 ], 00:23:39.166 "driver_specific": {} 00:23:39.166 } 00:23:39.166 ] 00:23:39.166 06:55:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:39.166 06:55:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:39.166 06:55:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:39.166 06:55:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:39.166 06:55:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:39.166 06:55:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:39.166 06:55:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:39.166 06:55:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:39.166 06:55:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:39.166 06:55:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:39.166 06:55:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:39.166 06:55:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:39.166 06:55:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:39.166 06:55:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.166 06:55:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:39.425 06:55:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:39.425 "name": "Existed_Raid", 00:23:39.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.425 "strip_size_kb": 64, 00:23:39.425 "state": "configuring", 00:23:39.425 "raid_level": "raid5f", 00:23:39.425 "superblock": false, 00:23:39.425 "num_base_bdevs": 4, 00:23:39.425 "num_base_bdevs_discovered": 2, 00:23:39.425 "num_base_bdevs_operational": 4, 00:23:39.425 "base_bdevs_list": [ 00:23:39.425 { 00:23:39.425 "name": "BaseBdev1", 00:23:39.425 "uuid": "9623fa48-6db4-4336-982d-de172ef1be47", 00:23:39.425 "is_configured": true, 00:23:39.425 "data_offset": 0, 00:23:39.425 "data_size": 65536 00:23:39.425 }, 00:23:39.425 { 00:23:39.425 "name": "BaseBdev2", 00:23:39.425 "uuid": "f9680a1e-2d05-404a-a775-427999754b78", 00:23:39.425 "is_configured": true, 00:23:39.425 "data_offset": 0, 00:23:39.425 "data_size": 65536 00:23:39.425 }, 00:23:39.425 { 00:23:39.425 "name": "BaseBdev3", 00:23:39.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.425 "is_configured": false, 00:23:39.425 "data_offset": 0, 00:23:39.425 "data_size": 0 00:23:39.425 }, 00:23:39.425 { 00:23:39.425 "name": "BaseBdev4", 00:23:39.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.425 "is_configured": false, 00:23:39.425 "data_offset": 0, 00:23:39.425 "data_size": 0 00:23:39.425 } 00:23:39.425 ] 00:23:39.425 }' 00:23:39.425 06:55:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:39.425 06:55:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.993 06:55:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:40.251 [2024-08-14 06:55:07.448672] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:40.251 BaseBdev3 00:23:40.251 06:55:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:40.251 06:55:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:23:40.251 06:55:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:40.251 06:55:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:40.252 06:55:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:40.252 06:55:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:40.252 06:55:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:40.513 06:55:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:40.774 [ 00:23:40.774 { 00:23:40.774 "name": "BaseBdev3", 00:23:40.774 "aliases": [ 00:23:40.774 "7ac76f88-f7b6-4b6d-8c56-6c431cc7c80c" 00:23:40.774 ], 00:23:40.774 "product_name": "Malloc disk", 00:23:40.774 "block_size": 512, 00:23:40.774 "num_blocks": 65536, 00:23:40.774 "uuid": "7ac76f88-f7b6-4b6d-8c56-6c431cc7c80c", 00:23:40.774 "assigned_rate_limits": { 00:23:40.774 "rw_ios_per_sec": 0, 00:23:40.774 "rw_mbytes_per_sec": 0, 00:23:40.774 "r_mbytes_per_sec": 0, 00:23:40.774 "w_mbytes_per_sec": 0 00:23:40.774 }, 00:23:40.774 "claimed": true, 00:23:40.774 "claim_type": "exclusive_write", 00:23:40.774 "zoned": false, 00:23:40.774 "supported_io_types": { 00:23:40.774 "read": true, 00:23:40.774 "write": true, 00:23:40.774 "unmap": true, 00:23:40.774 "flush": true, 00:23:40.774 "reset": true, 00:23:40.774 "nvme_admin": false, 00:23:40.774 "nvme_io": false, 00:23:40.774 "nvme_io_md": false, 00:23:40.775 "write_zeroes": true, 00:23:40.775 "zcopy": true, 00:23:40.775 "get_zone_info": false, 00:23:40.775 "zone_management": false, 00:23:40.775 "zone_append": false, 00:23:40.775 "compare": false, 00:23:40.775 "compare_and_write": false, 00:23:40.775 "abort": true, 00:23:40.775 "seek_hole": false, 00:23:40.775 "seek_data": false, 00:23:40.775 "copy": true, 00:23:40.775 "nvme_iov_md": false 00:23:40.775 }, 00:23:40.775 "memory_domains": [ 00:23:40.775 { 00:23:40.775 "dma_device_id": "system", 00:23:40.775 "dma_device_type": 1 00:23:40.775 }, 00:23:40.775 { 00:23:40.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:40.775 "dma_device_type": 2 00:23:40.775 } 00:23:40.775 ], 00:23:40.775 "driver_specific": {} 00:23:40.775 } 00:23:40.775 ] 00:23:40.775 06:55:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:40.775 06:55:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:40.775 06:55:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:40.775 06:55:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:40.775 06:55:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:40.775 06:55:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:40.775 06:55:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:40.775 06:55:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:40.775 06:55:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:40.775 06:55:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:40.775 06:55:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:40.775 06:55:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:40.775 06:55:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:40.775 06:55:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.775 06:55:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:41.034 06:55:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:41.034 "name": "Existed_Raid", 00:23:41.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.034 "strip_size_kb": 64, 00:23:41.034 "state": "configuring", 00:23:41.034 "raid_level": "raid5f", 00:23:41.034 "superblock": false, 00:23:41.034 "num_base_bdevs": 4, 00:23:41.034 "num_base_bdevs_discovered": 3, 00:23:41.034 "num_base_bdevs_operational": 4, 00:23:41.034 "base_bdevs_list": [ 00:23:41.034 { 00:23:41.034 "name": "BaseBdev1", 00:23:41.034 "uuid": "9623fa48-6db4-4336-982d-de172ef1be47", 00:23:41.034 "is_configured": true, 00:23:41.034 "data_offset": 0, 00:23:41.034 "data_size": 65536 00:23:41.034 }, 00:23:41.034 { 00:23:41.034 "name": "BaseBdev2", 00:23:41.034 "uuid": "f9680a1e-2d05-404a-a775-427999754b78", 00:23:41.034 "is_configured": true, 00:23:41.034 "data_offset": 0, 00:23:41.034 "data_size": 65536 00:23:41.034 }, 00:23:41.034 { 00:23:41.034 "name": "BaseBdev3", 00:23:41.034 "uuid": "7ac76f88-f7b6-4b6d-8c56-6c431cc7c80c", 00:23:41.034 "is_configured": true, 00:23:41.034 "data_offset": 0, 00:23:41.034 "data_size": 65536 00:23:41.034 }, 00:23:41.034 { 00:23:41.034 "name": "BaseBdev4", 00:23:41.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.034 "is_configured": false, 00:23:41.034 "data_offset": 0, 00:23:41.034 "data_size": 0 00:23:41.034 } 00:23:41.034 ] 00:23:41.034 }' 00:23:41.034 06:55:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:41.034 06:55:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.670 06:55:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:41.929 [2024-08-14 06:55:09.033451] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:41.929 [2024-08-14 
06:55:09.033607] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:23:41.929 [2024-08-14 06:55:09.033639] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:41.929 [2024-08-14 06:55:09.033993] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:23:41.929 [2024-08-14 06:55:09.034636] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:23:41.929 [2024-08-14 06:55:09.034668] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:23:41.929 [2024-08-14 06:55:09.034909] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.929 BaseBdev4 00:23:41.929 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:41.929 06:55:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:23:41.929 06:55:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:41.929 06:55:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:41.929 06:55:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:41.929 06:55:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:41.929 06:55:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:42.188 06:55:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:42.447 [ 00:23:42.447 { 00:23:42.447 "name": "BaseBdev4", 00:23:42.447 "aliases": [ 00:23:42.447 "2d110f7b-8fa2-47a1-b126-4258b6b36f08" 00:23:42.447 ], 00:23:42.447 "product_name": "Malloc disk", 00:23:42.447 "block_size": 512, 00:23:42.447 "num_blocks": 65536, 00:23:42.447 "uuid": "2d110f7b-8fa2-47a1-b126-4258b6b36f08", 00:23:42.447 "assigned_rate_limits": { 00:23:42.447 "rw_ios_per_sec": 0, 00:23:42.447 "rw_mbytes_per_sec": 0, 00:23:42.447 "r_mbytes_per_sec": 0, 00:23:42.447 "w_mbytes_per_sec": 0 00:23:42.447 }, 00:23:42.447 "claimed": true, 00:23:42.447 "claim_type": "exclusive_write", 00:23:42.447 "zoned": false, 00:23:42.447 "supported_io_types": { 00:23:42.447 "read": true, 00:23:42.447 "write": true, 00:23:42.447 "unmap": true, 00:23:42.447 "flush": true, 00:23:42.447 "reset": true, 00:23:42.447 "nvme_admin": false, 00:23:42.447 "nvme_io": false, 00:23:42.447 "nvme_io_md": false, 00:23:42.447 "write_zeroes": true, 00:23:42.447 "zcopy": true, 00:23:42.447 "get_zone_info": false, 00:23:42.447 "zone_management": false, 00:23:42.447 "zone_append": false, 00:23:42.447 "compare": false, 00:23:42.447 "compare_and_write": false, 00:23:42.447 "abort": true, 00:23:42.447 "seek_hole": false, 00:23:42.447 "seek_data": false, 00:23:42.447 "copy": true, 00:23:42.447 "nvme_iov_md": false 00:23:42.447 }, 00:23:42.447 "memory_domains": [ 00:23:42.447 { 00:23:42.447 "dma_device_id": "system", 00:23:42.447 "dma_device_type": 1 00:23:42.447 }, 00:23:42.447 { 00:23:42.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:42.447 "dma_device_type": 2 00:23:42.447 } 00:23:42.447 ], 00:23:42.447 "driver_specific": {} 00:23:42.447 } 00:23:42.447 ] 00:23:42.447 06:55:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:42.447 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:42.447 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:42.447 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:42.447 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:42.447 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:42.447 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:42.447 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:42.447 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:42.447 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:42.447 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:42.447 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:42.447 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:42.447 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.447 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:42.707 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:42.707 "name": "Existed_Raid", 00:23:42.707 "uuid": "a6962af7-fb65-4cdb-afbc-058fc4e37ecf", 00:23:42.707 "strip_size_kb": 64, 00:23:42.707 "state": "online", 00:23:42.707 "raid_level": "raid5f", 00:23:42.707 "superblock": false, 00:23:42.707 "num_base_bdevs": 4, 00:23:42.707 "num_base_bdevs_discovered": 4, 00:23:42.707 "num_base_bdevs_operational": 4, 00:23:42.707 "base_bdevs_list": [ 00:23:42.707 { 00:23:42.707 "name": "BaseBdev1", 00:23:42.707 "uuid": "9623fa48-6db4-4336-982d-de172ef1be47", 00:23:42.707 "is_configured": true, 00:23:42.707 "data_offset": 0, 00:23:42.707 "data_size": 65536 00:23:42.707 }, 00:23:42.707 { 00:23:42.707 "name": "BaseBdev2", 00:23:42.707 "uuid": "f9680a1e-2d05-404a-a775-427999754b78", 00:23:42.707 "is_configured": true, 00:23:42.707 "data_offset": 0, 00:23:42.707 "data_size": 65536 00:23:42.707 }, 00:23:42.707 { 00:23:42.707 "name": "BaseBdev3", 00:23:42.707 "uuid": "7ac76f88-f7b6-4b6d-8c56-6c431cc7c80c", 00:23:42.707 "is_configured": true, 00:23:42.707 "data_offset": 0, 00:23:42.707 "data_size": 65536 00:23:42.707 }, 00:23:42.707 { 00:23:42.707 "name": "BaseBdev4", 00:23:42.707 "uuid": "2d110f7b-8fa2-47a1-b126-4258b6b36f08", 00:23:42.707 "is_configured": true, 00:23:42.707 "data_offset": 0, 00:23:42.707 "data_size": 65536 00:23:42.707 } 00:23:42.707 ] 00:23:42.707 }' 00:23:42.707 06:55:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:42.707 06:55:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.276 06:55:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 
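(Aside, not part of the captured output: a brief sketch of the property checks performed below, again assuming the rpc.py path and socket from the log; the expected values are the ones the captured output reports, i.e. 512-byte blocks and 196608 blocks, three data members' worth out of four 65536-block base bdevs.)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    raid=$("$rpc" -s "$sock" bdev_get_bdevs -b Existed_Raid | jq '.[]')
    # The assembled raid5f volume exposes 512-byte blocks and 196608 blocks.
    [[ "$(jq .block_size <<< "$raid")" == 512 ]]
    [[ "$(jq .num_blocks <<< "$raid")" == 196608 ]]
    # List the configured base bdevs backing the volume.
    jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid"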
00:23:43.276 06:55:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:43.276 06:55:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:43.276 06:55:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:43.276 06:55:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:43.276 06:55:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:43.276 06:55:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:43.276 06:55:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:43.536 [2024-08-14 06:55:10.655156] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:43.536 06:55:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:43.536 "name": "Existed_Raid", 00:23:43.536 "aliases": [ 00:23:43.536 "a6962af7-fb65-4cdb-afbc-058fc4e37ecf" 00:23:43.536 ], 00:23:43.536 "product_name": "Raid Volume", 00:23:43.536 "block_size": 512, 00:23:43.536 "num_blocks": 196608, 00:23:43.536 "uuid": "a6962af7-fb65-4cdb-afbc-058fc4e37ecf", 00:23:43.536 "assigned_rate_limits": { 00:23:43.536 "rw_ios_per_sec": 0, 00:23:43.536 "rw_mbytes_per_sec": 0, 00:23:43.536 "r_mbytes_per_sec": 0, 00:23:43.536 "w_mbytes_per_sec": 0 00:23:43.536 }, 00:23:43.536 "claimed": false, 00:23:43.536 "zoned": false, 00:23:43.536 "supported_io_types": { 00:23:43.536 "read": true, 00:23:43.536 "write": true, 00:23:43.536 "unmap": false, 00:23:43.536 "flush": false, 00:23:43.536 "reset": true, 00:23:43.536 "nvme_admin": false, 00:23:43.536 "nvme_io": false, 00:23:43.536 "nvme_io_md": false, 00:23:43.536 "write_zeroes": true, 00:23:43.536 "zcopy": false, 00:23:43.536 "get_zone_info": false, 00:23:43.536 "zone_management": false, 00:23:43.536 "zone_append": false, 00:23:43.536 "compare": false, 00:23:43.536 "compare_and_write": false, 00:23:43.536 "abort": false, 00:23:43.536 "seek_hole": false, 00:23:43.536 "seek_data": false, 00:23:43.536 "copy": false, 00:23:43.536 "nvme_iov_md": false 00:23:43.536 }, 00:23:43.536 "driver_specific": { 00:23:43.536 "raid": { 00:23:43.536 "uuid": "a6962af7-fb65-4cdb-afbc-058fc4e37ecf", 00:23:43.536 "strip_size_kb": 64, 00:23:43.536 "state": "online", 00:23:43.536 "raid_level": "raid5f", 00:23:43.536 "superblock": false, 00:23:43.536 "num_base_bdevs": 4, 00:23:43.536 "num_base_bdevs_discovered": 4, 00:23:43.536 "num_base_bdevs_operational": 4, 00:23:43.536 "base_bdevs_list": [ 00:23:43.536 { 00:23:43.536 "name": "BaseBdev1", 00:23:43.536 "uuid": "9623fa48-6db4-4336-982d-de172ef1be47", 00:23:43.536 "is_configured": true, 00:23:43.536 "data_offset": 0, 00:23:43.536 "data_size": 65536 00:23:43.536 }, 00:23:43.536 { 00:23:43.536 "name": "BaseBdev2", 00:23:43.536 "uuid": "f9680a1e-2d05-404a-a775-427999754b78", 00:23:43.536 "is_configured": true, 00:23:43.536 "data_offset": 0, 00:23:43.536 "data_size": 65536 00:23:43.536 }, 00:23:43.536 { 00:23:43.536 "name": "BaseBdev3", 00:23:43.536 "uuid": "7ac76f88-f7b6-4b6d-8c56-6c431cc7c80c", 00:23:43.536 "is_configured": true, 00:23:43.536 "data_offset": 0, 00:23:43.536 "data_size": 65536 00:23:43.536 }, 00:23:43.536 { 00:23:43.536 "name": "BaseBdev4", 00:23:43.536 "uuid": "2d110f7b-8fa2-47a1-b126-4258b6b36f08", 00:23:43.536 
"is_configured": true, 00:23:43.536 "data_offset": 0, 00:23:43.536 "data_size": 65536 00:23:43.536 } 00:23:43.536 ] 00:23:43.536 } 00:23:43.536 } 00:23:43.536 }' 00:23:43.536 06:55:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:43.536 06:55:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:43.536 BaseBdev2 00:23:43.536 BaseBdev3 00:23:43.536 BaseBdev4' 00:23:43.537 06:55:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:43.537 06:55:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:43.537 06:55:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:43.797 06:55:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:43.797 "name": "BaseBdev1", 00:23:43.797 "aliases": [ 00:23:43.797 "9623fa48-6db4-4336-982d-de172ef1be47" 00:23:43.797 ], 00:23:43.797 "product_name": "Malloc disk", 00:23:43.797 "block_size": 512, 00:23:43.797 "num_blocks": 65536, 00:23:43.797 "uuid": "9623fa48-6db4-4336-982d-de172ef1be47", 00:23:43.797 "assigned_rate_limits": { 00:23:43.797 "rw_ios_per_sec": 0, 00:23:43.797 "rw_mbytes_per_sec": 0, 00:23:43.797 "r_mbytes_per_sec": 0, 00:23:43.797 "w_mbytes_per_sec": 0 00:23:43.797 }, 00:23:43.797 "claimed": true, 00:23:43.797 "claim_type": "exclusive_write", 00:23:43.797 "zoned": false, 00:23:43.797 "supported_io_types": { 00:23:43.797 "read": true, 00:23:43.797 "write": true, 00:23:43.797 "unmap": true, 00:23:43.797 "flush": true, 00:23:43.797 "reset": true, 00:23:43.797 "nvme_admin": false, 00:23:43.797 "nvme_io": false, 00:23:43.797 "nvme_io_md": false, 00:23:43.797 "write_zeroes": true, 00:23:43.797 "zcopy": true, 00:23:43.797 "get_zone_info": false, 00:23:43.797 "zone_management": false, 00:23:43.797 "zone_append": false, 00:23:43.797 "compare": false, 00:23:43.797 "compare_and_write": false, 00:23:43.797 "abort": true, 00:23:43.797 "seek_hole": false, 00:23:43.797 "seek_data": false, 00:23:43.797 "copy": true, 00:23:43.797 "nvme_iov_md": false 00:23:43.797 }, 00:23:43.797 "memory_domains": [ 00:23:43.797 { 00:23:43.797 "dma_device_id": "system", 00:23:43.797 "dma_device_type": 1 00:23:43.797 }, 00:23:43.797 { 00:23:43.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:43.797 "dma_device_type": 2 00:23:43.797 } 00:23:43.797 ], 00:23:43.797 "driver_specific": {} 00:23:43.797 }' 00:23:43.797 06:55:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:43.797 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:44.056 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:44.056 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:44.056 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:44.056 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:44.056 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:44.056 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:44.056 06:55:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:44.056 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:44.056 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:44.315 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:44.315 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:44.315 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:44.315 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:44.575 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:44.575 "name": "BaseBdev2", 00:23:44.575 "aliases": [ 00:23:44.575 "f9680a1e-2d05-404a-a775-427999754b78" 00:23:44.575 ], 00:23:44.575 "product_name": "Malloc disk", 00:23:44.575 "block_size": 512, 00:23:44.575 "num_blocks": 65536, 00:23:44.575 "uuid": "f9680a1e-2d05-404a-a775-427999754b78", 00:23:44.575 "assigned_rate_limits": { 00:23:44.575 "rw_ios_per_sec": 0, 00:23:44.575 "rw_mbytes_per_sec": 0, 00:23:44.575 "r_mbytes_per_sec": 0, 00:23:44.575 "w_mbytes_per_sec": 0 00:23:44.575 }, 00:23:44.575 "claimed": true, 00:23:44.575 "claim_type": "exclusive_write", 00:23:44.575 "zoned": false, 00:23:44.575 "supported_io_types": { 00:23:44.575 "read": true, 00:23:44.575 "write": true, 00:23:44.575 "unmap": true, 00:23:44.575 "flush": true, 00:23:44.575 "reset": true, 00:23:44.575 "nvme_admin": false, 00:23:44.575 "nvme_io": false, 00:23:44.575 "nvme_io_md": false, 00:23:44.575 "write_zeroes": true, 00:23:44.575 "zcopy": true, 00:23:44.575 "get_zone_info": false, 00:23:44.575 "zone_management": false, 00:23:44.575 "zone_append": false, 00:23:44.575 "compare": false, 00:23:44.575 "compare_and_write": false, 00:23:44.575 "abort": true, 00:23:44.575 "seek_hole": false, 00:23:44.575 "seek_data": false, 00:23:44.575 "copy": true, 00:23:44.575 "nvme_iov_md": false 00:23:44.575 }, 00:23:44.575 "memory_domains": [ 00:23:44.575 { 00:23:44.575 "dma_device_id": "system", 00:23:44.575 "dma_device_type": 1 00:23:44.575 }, 00:23:44.575 { 00:23:44.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.575 "dma_device_type": 2 00:23:44.575 } 00:23:44.575 ], 00:23:44.575 "driver_specific": {} 00:23:44.575 }' 00:23:44.575 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:44.575 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:44.575 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:44.575 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:44.575 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:44.575 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:44.575 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:44.575 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:44.834 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:44.834 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:44.834 
06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:44.834 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:44.834 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:44.834 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:44.834 06:55:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:45.093 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:45.093 "name": "BaseBdev3", 00:23:45.093 "aliases": [ 00:23:45.093 "7ac76f88-f7b6-4b6d-8c56-6c431cc7c80c" 00:23:45.093 ], 00:23:45.093 "product_name": "Malloc disk", 00:23:45.093 "block_size": 512, 00:23:45.093 "num_blocks": 65536, 00:23:45.093 "uuid": "7ac76f88-f7b6-4b6d-8c56-6c431cc7c80c", 00:23:45.093 "assigned_rate_limits": { 00:23:45.093 "rw_ios_per_sec": 0, 00:23:45.093 "rw_mbytes_per_sec": 0, 00:23:45.093 "r_mbytes_per_sec": 0, 00:23:45.093 "w_mbytes_per_sec": 0 00:23:45.093 }, 00:23:45.093 "claimed": true, 00:23:45.093 "claim_type": "exclusive_write", 00:23:45.093 "zoned": false, 00:23:45.093 "supported_io_types": { 00:23:45.093 "read": true, 00:23:45.093 "write": true, 00:23:45.093 "unmap": true, 00:23:45.093 "flush": true, 00:23:45.093 "reset": true, 00:23:45.093 "nvme_admin": false, 00:23:45.093 "nvme_io": false, 00:23:45.093 "nvme_io_md": false, 00:23:45.093 "write_zeroes": true, 00:23:45.093 "zcopy": true, 00:23:45.093 "get_zone_info": false, 00:23:45.093 "zone_management": false, 00:23:45.093 "zone_append": false, 00:23:45.093 "compare": false, 00:23:45.093 "compare_and_write": false, 00:23:45.093 "abort": true, 00:23:45.093 "seek_hole": false, 00:23:45.093 "seek_data": false, 00:23:45.093 "copy": true, 00:23:45.093 "nvme_iov_md": false 00:23:45.093 }, 00:23:45.093 "memory_domains": [ 00:23:45.093 { 00:23:45.093 "dma_device_id": "system", 00:23:45.093 "dma_device_type": 1 00:23:45.093 }, 00:23:45.093 { 00:23:45.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.093 "dma_device_type": 2 00:23:45.093 } 00:23:45.093 ], 00:23:45.093 "driver_specific": {} 00:23:45.093 }' 00:23:45.093 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:45.093 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:45.093 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:45.093 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:45.353 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:45.353 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:45.353 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:45.353 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:45.353 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:45.353 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:45.353 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:45.353 06:55:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:45.353 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:45.353 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:45.354 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:45.613 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:45.613 "name": "BaseBdev4", 00:23:45.614 "aliases": [ 00:23:45.614 "2d110f7b-8fa2-47a1-b126-4258b6b36f08" 00:23:45.614 ], 00:23:45.614 "product_name": "Malloc disk", 00:23:45.614 "block_size": 512, 00:23:45.614 "num_blocks": 65536, 00:23:45.614 "uuid": "2d110f7b-8fa2-47a1-b126-4258b6b36f08", 00:23:45.614 "assigned_rate_limits": { 00:23:45.614 "rw_ios_per_sec": 0, 00:23:45.614 "rw_mbytes_per_sec": 0, 00:23:45.614 "r_mbytes_per_sec": 0, 00:23:45.614 "w_mbytes_per_sec": 0 00:23:45.614 }, 00:23:45.614 "claimed": true, 00:23:45.614 "claim_type": "exclusive_write", 00:23:45.614 "zoned": false, 00:23:45.614 "supported_io_types": { 00:23:45.614 "read": true, 00:23:45.614 "write": true, 00:23:45.614 "unmap": true, 00:23:45.614 "flush": true, 00:23:45.614 "reset": true, 00:23:45.614 "nvme_admin": false, 00:23:45.614 "nvme_io": false, 00:23:45.614 "nvme_io_md": false, 00:23:45.614 "write_zeroes": true, 00:23:45.614 "zcopy": true, 00:23:45.614 "get_zone_info": false, 00:23:45.614 "zone_management": false, 00:23:45.614 "zone_append": false, 00:23:45.614 "compare": false, 00:23:45.614 "compare_and_write": false, 00:23:45.614 "abort": true, 00:23:45.614 "seek_hole": false, 00:23:45.614 "seek_data": false, 00:23:45.614 "copy": true, 00:23:45.614 "nvme_iov_md": false 00:23:45.614 }, 00:23:45.614 "memory_domains": [ 00:23:45.614 { 00:23:45.614 "dma_device_id": "system", 00:23:45.614 "dma_device_type": 1 00:23:45.614 }, 00:23:45.614 { 00:23:45.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.614 "dma_device_type": 2 00:23:45.614 } 00:23:45.614 ], 00:23:45.614 "driver_specific": {} 00:23:45.614 }' 00:23:45.614 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:45.873 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:45.873 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:45.873 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:45.873 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:45.873 06:55:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:45.873 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:45.873 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:45.873 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:45.873 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:45.873 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:46.131 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:46.131 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:46.131 [2024-08-14 06:55:13.382544] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:46.390 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.648 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:46.648 "name": "Existed_Raid", 00:23:46.648 "uuid": "a6962af7-fb65-4cdb-afbc-058fc4e37ecf", 00:23:46.648 "strip_size_kb": 64, 00:23:46.648 "state": "online", 00:23:46.648 "raid_level": "raid5f", 00:23:46.648 "superblock": false, 00:23:46.648 "num_base_bdevs": 4, 00:23:46.648 "num_base_bdevs_discovered": 3, 00:23:46.648 "num_base_bdevs_operational": 3, 00:23:46.648 "base_bdevs_list": [ 00:23:46.648 { 00:23:46.648 "name": null, 00:23:46.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.648 "is_configured": false, 00:23:46.648 "data_offset": 0, 00:23:46.648 "data_size": 65536 00:23:46.648 }, 00:23:46.648 { 00:23:46.648 "name": "BaseBdev2", 00:23:46.648 "uuid": "f9680a1e-2d05-404a-a775-427999754b78", 00:23:46.648 "is_configured": true, 00:23:46.648 "data_offset": 0, 00:23:46.648 "data_size": 65536 00:23:46.648 }, 00:23:46.648 { 00:23:46.648 "name": "BaseBdev3", 00:23:46.648 "uuid": "7ac76f88-f7b6-4b6d-8c56-6c431cc7c80c", 00:23:46.648 "is_configured": true, 00:23:46.648 "data_offset": 0, 00:23:46.648 "data_size": 65536 00:23:46.648 }, 00:23:46.648 { 00:23:46.648 "name": "BaseBdev4", 00:23:46.648 "uuid": "2d110f7b-8fa2-47a1-b126-4258b6b36f08", 00:23:46.648 "is_configured": true, 00:23:46.648 
"data_offset": 0, 00:23:46.648 "data_size": 65536 00:23:46.648 } 00:23:46.648 ] 00:23:46.648 }' 00:23:46.648 06:55:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:46.648 06:55:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.214 06:55:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:47.214 06:55:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:47.214 06:55:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:47.214 06:55:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.472 06:55:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:47.472 06:55:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:47.472 06:55:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:47.730 [2024-08-14 06:55:14.828386] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:47.730 [2024-08-14 06:55:14.828622] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:47.730 [2024-08-14 06:55:14.840382] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:47.730 06:55:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:47.730 06:55:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:47.730 06:55:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:47.730 06:55:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.989 06:55:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:47.989 06:55:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:47.989 06:55:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:48.249 [2024-08-14 06:55:15.355649] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:48.249 06:55:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:48.249 06:55:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:48.249 06:55:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:48.249 06:55:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.508 06:55:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:48.508 06:55:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:48.508 06:55:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev4 00:23:48.767 [2024-08-14 06:55:15.958774] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:48.767 [2024-08-14 06:55:15.958948] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:23:48.767 06:55:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:48.767 06:55:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:48.768 06:55:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.768 06:55:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:49.026 06:55:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:49.026 06:55:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:49.026 06:55:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:49.026 06:55:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:49.026 06:55:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:49.026 06:55:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:49.285 BaseBdev2 00:23:49.285 06:55:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:49.285 06:55:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:23:49.285 06:55:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:49.285 06:55:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:49.285 06:55:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:49.285 06:55:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:49.285 06:55:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:49.544 06:55:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:49.804 [ 00:23:49.804 { 00:23:49.804 "name": "BaseBdev2", 00:23:49.804 "aliases": [ 00:23:49.804 "5fd50144-c301-4bef-8e2c-d8fe221685c8" 00:23:49.804 ], 00:23:49.804 "product_name": "Malloc disk", 00:23:49.804 "block_size": 512, 00:23:49.804 "num_blocks": 65536, 00:23:49.804 "uuid": "5fd50144-c301-4bef-8e2c-d8fe221685c8", 00:23:49.804 "assigned_rate_limits": { 00:23:49.804 "rw_ios_per_sec": 0, 00:23:49.804 "rw_mbytes_per_sec": 0, 00:23:49.804 "r_mbytes_per_sec": 0, 00:23:49.804 "w_mbytes_per_sec": 0 00:23:49.804 }, 00:23:49.804 "claimed": false, 00:23:49.804 "zoned": false, 00:23:49.804 "supported_io_types": { 00:23:49.804 "read": true, 00:23:49.804 "write": true, 00:23:49.804 "unmap": true, 00:23:49.804 "flush": true, 00:23:49.804 "reset": true, 00:23:49.804 "nvme_admin": false, 00:23:49.804 "nvme_io": false, 00:23:49.804 "nvme_io_md": false, 00:23:49.804 "write_zeroes": true, 00:23:49.804 "zcopy": true, 
00:23:49.804 "get_zone_info": false, 00:23:49.804 "zone_management": false, 00:23:49.804 "zone_append": false, 00:23:49.804 "compare": false, 00:23:49.804 "compare_and_write": false, 00:23:49.804 "abort": true, 00:23:49.804 "seek_hole": false, 00:23:49.804 "seek_data": false, 00:23:49.804 "copy": true, 00:23:49.804 "nvme_iov_md": false 00:23:49.804 }, 00:23:49.804 "memory_domains": [ 00:23:49.804 { 00:23:49.804 "dma_device_id": "system", 00:23:49.804 "dma_device_type": 1 00:23:49.804 }, 00:23:49.804 { 00:23:49.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:49.804 "dma_device_type": 2 00:23:49.804 } 00:23:49.804 ], 00:23:49.804 "driver_specific": {} 00:23:49.804 } 00:23:49.804 ] 00:23:50.064 06:55:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:50.064 06:55:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:50.064 06:55:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:50.064 06:55:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:50.064 BaseBdev3 00:23:50.064 06:55:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:50.064 06:55:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:23:50.064 06:55:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:50.064 06:55:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:50.064 06:55:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:50.064 06:55:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:50.064 06:55:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:50.323 06:55:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:50.582 [ 00:23:50.582 { 00:23:50.582 "name": "BaseBdev3", 00:23:50.582 "aliases": [ 00:23:50.582 "212f68d8-8943-4d4f-baf0-b4eebb96784c" 00:23:50.582 ], 00:23:50.582 "product_name": "Malloc disk", 00:23:50.582 "block_size": 512, 00:23:50.582 "num_blocks": 65536, 00:23:50.582 "uuid": "212f68d8-8943-4d4f-baf0-b4eebb96784c", 00:23:50.582 "assigned_rate_limits": { 00:23:50.582 "rw_ios_per_sec": 0, 00:23:50.582 "rw_mbytes_per_sec": 0, 00:23:50.582 "r_mbytes_per_sec": 0, 00:23:50.582 "w_mbytes_per_sec": 0 00:23:50.582 }, 00:23:50.582 "claimed": false, 00:23:50.582 "zoned": false, 00:23:50.582 "supported_io_types": { 00:23:50.582 "read": true, 00:23:50.582 "write": true, 00:23:50.582 "unmap": true, 00:23:50.583 "flush": true, 00:23:50.583 "reset": true, 00:23:50.583 "nvme_admin": false, 00:23:50.583 "nvme_io": false, 00:23:50.583 "nvme_io_md": false, 00:23:50.583 "write_zeroes": true, 00:23:50.583 "zcopy": true, 00:23:50.583 "get_zone_info": false, 00:23:50.583 "zone_management": false, 00:23:50.583 "zone_append": false, 00:23:50.583 "compare": false, 00:23:50.583 "compare_and_write": false, 00:23:50.583 "abort": true, 00:23:50.583 "seek_hole": false, 00:23:50.583 "seek_data": false, 00:23:50.583 "copy": true, 00:23:50.583 
"nvme_iov_md": false 00:23:50.583 }, 00:23:50.583 "memory_domains": [ 00:23:50.583 { 00:23:50.583 "dma_device_id": "system", 00:23:50.583 "dma_device_type": 1 00:23:50.583 }, 00:23:50.583 { 00:23:50.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:50.583 "dma_device_type": 2 00:23:50.583 } 00:23:50.583 ], 00:23:50.583 "driver_specific": {} 00:23:50.583 } 00:23:50.583 ] 00:23:50.583 06:55:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:50.583 06:55:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:50.583 06:55:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:50.583 06:55:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:50.842 BaseBdev4 00:23:50.842 06:55:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:50.842 06:55:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:23:50.842 06:55:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:50.842 06:55:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:50.842 06:55:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:50.842 06:55:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:50.842 06:55:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:51.101 06:55:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:51.395 [ 00:23:51.395 { 00:23:51.395 "name": "BaseBdev4", 00:23:51.395 "aliases": [ 00:23:51.395 "ffd5d996-4eee-423f-8f7a-36497cbe5059" 00:23:51.395 ], 00:23:51.395 "product_name": "Malloc disk", 00:23:51.395 "block_size": 512, 00:23:51.395 "num_blocks": 65536, 00:23:51.395 "uuid": "ffd5d996-4eee-423f-8f7a-36497cbe5059", 00:23:51.395 "assigned_rate_limits": { 00:23:51.395 "rw_ios_per_sec": 0, 00:23:51.395 "rw_mbytes_per_sec": 0, 00:23:51.395 "r_mbytes_per_sec": 0, 00:23:51.395 "w_mbytes_per_sec": 0 00:23:51.395 }, 00:23:51.395 "claimed": false, 00:23:51.395 "zoned": false, 00:23:51.395 "supported_io_types": { 00:23:51.395 "read": true, 00:23:51.396 "write": true, 00:23:51.396 "unmap": true, 00:23:51.396 "flush": true, 00:23:51.396 "reset": true, 00:23:51.396 "nvme_admin": false, 00:23:51.396 "nvme_io": false, 00:23:51.396 "nvme_io_md": false, 00:23:51.396 "write_zeroes": true, 00:23:51.396 "zcopy": true, 00:23:51.396 "get_zone_info": false, 00:23:51.396 "zone_management": false, 00:23:51.396 "zone_append": false, 00:23:51.396 "compare": false, 00:23:51.396 "compare_and_write": false, 00:23:51.396 "abort": true, 00:23:51.396 "seek_hole": false, 00:23:51.396 "seek_data": false, 00:23:51.396 "copy": true, 00:23:51.396 "nvme_iov_md": false 00:23:51.396 }, 00:23:51.396 "memory_domains": [ 00:23:51.396 { 00:23:51.396 "dma_device_id": "system", 00:23:51.396 "dma_device_type": 1 00:23:51.396 }, 00:23:51.396 { 00:23:51.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:51.396 "dma_device_type": 2 00:23:51.396 } 00:23:51.396 ], 00:23:51.396 
"driver_specific": {} 00:23:51.396 } 00:23:51.396 ] 00:23:51.396 06:55:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:51.396 06:55:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:51.396 06:55:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:51.396 06:55:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:51.656 [2024-08-14 06:55:18.762938] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:51.656 [2024-08-14 06:55:18.763106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:51.656 [2024-08-14 06:55:18.763142] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:51.656 [2024-08-14 06:55:18.765370] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:51.656 [2024-08-14 06:55:18.765494] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:51.656 06:55:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:51.656 06:55:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:51.656 06:55:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:51.657 06:55:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:51.657 06:55:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:51.657 06:55:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:51.657 06:55:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:51.657 06:55:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:51.657 06:55:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:51.657 06:55:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:51.657 06:55:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.657 06:55:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:51.915 06:55:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:51.915 "name": "Existed_Raid", 00:23:51.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.915 "strip_size_kb": 64, 00:23:51.915 "state": "configuring", 00:23:51.915 "raid_level": "raid5f", 00:23:51.915 "superblock": false, 00:23:51.915 "num_base_bdevs": 4, 00:23:51.915 "num_base_bdevs_discovered": 3, 00:23:51.915 "num_base_bdevs_operational": 4, 00:23:51.915 "base_bdevs_list": [ 00:23:51.915 { 00:23:51.915 "name": "BaseBdev1", 00:23:51.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.915 "is_configured": false, 00:23:51.915 "data_offset": 0, 00:23:51.915 "data_size": 0 00:23:51.915 }, 00:23:51.915 { 00:23:51.915 "name": "BaseBdev2", 00:23:51.915 
"uuid": "5fd50144-c301-4bef-8e2c-d8fe221685c8", 00:23:51.915 "is_configured": true, 00:23:51.915 "data_offset": 0, 00:23:51.915 "data_size": 65536 00:23:51.915 }, 00:23:51.915 { 00:23:51.915 "name": "BaseBdev3", 00:23:51.915 "uuid": "212f68d8-8943-4d4f-baf0-b4eebb96784c", 00:23:51.915 "is_configured": true, 00:23:51.915 "data_offset": 0, 00:23:51.915 "data_size": 65536 00:23:51.915 }, 00:23:51.915 { 00:23:51.915 "name": "BaseBdev4", 00:23:51.915 "uuid": "ffd5d996-4eee-423f-8f7a-36497cbe5059", 00:23:51.915 "is_configured": true, 00:23:51.915 "data_offset": 0, 00:23:51.915 "data_size": 65536 00:23:51.915 } 00:23:51.915 ] 00:23:51.915 }' 00:23:51.915 06:55:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:51.915 06:55:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.483 06:55:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:52.742 [2024-08-14 06:55:19.958124] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:52.742 06:55:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:52.742 06:55:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:52.742 06:55:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:52.742 06:55:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:52.742 06:55:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:52.742 06:55:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:52.742 06:55:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:52.742 06:55:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:52.742 06:55:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:52.742 06:55:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:52.742 06:55:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.742 06:55:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:53.310 06:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:53.310 "name": "Existed_Raid", 00:23:53.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.310 "strip_size_kb": 64, 00:23:53.310 "state": "configuring", 00:23:53.310 "raid_level": "raid5f", 00:23:53.310 "superblock": false, 00:23:53.310 "num_base_bdevs": 4, 00:23:53.310 "num_base_bdevs_discovered": 2, 00:23:53.310 "num_base_bdevs_operational": 4, 00:23:53.310 "base_bdevs_list": [ 00:23:53.310 { 00:23:53.310 "name": "BaseBdev1", 00:23:53.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.310 "is_configured": false, 00:23:53.310 "data_offset": 0, 00:23:53.310 "data_size": 0 00:23:53.310 }, 00:23:53.310 { 00:23:53.310 "name": null, 00:23:53.310 "uuid": "5fd50144-c301-4bef-8e2c-d8fe221685c8", 00:23:53.310 "is_configured": false, 00:23:53.310 "data_offset": 0, 
00:23:53.310 "data_size": 65536 00:23:53.310 }, 00:23:53.310 { 00:23:53.310 "name": "BaseBdev3", 00:23:53.310 "uuid": "212f68d8-8943-4d4f-baf0-b4eebb96784c", 00:23:53.310 "is_configured": true, 00:23:53.310 "data_offset": 0, 00:23:53.310 "data_size": 65536 00:23:53.310 }, 00:23:53.310 { 00:23:53.310 "name": "BaseBdev4", 00:23:53.310 "uuid": "ffd5d996-4eee-423f-8f7a-36497cbe5059", 00:23:53.310 "is_configured": true, 00:23:53.310 "data_offset": 0, 00:23:53.310 "data_size": 65536 00:23:53.310 } 00:23:53.310 ] 00:23:53.310 }' 00:23:53.310 06:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:53.310 06:55:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.878 06:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:53.878 06:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.137 06:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:54.138 06:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:54.138 [2024-08-14 06:55:21.371227] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:54.138 BaseBdev1 00:23:54.138 06:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:54.138 06:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:23:54.138 06:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:54.138 06:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:54.138 06:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:54.138 06:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:54.138 06:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:54.397 06:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:54.664 [ 00:23:54.664 { 00:23:54.664 "name": "BaseBdev1", 00:23:54.664 "aliases": [ 00:23:54.664 "84a42201-3685-48f7-8850-4e84709ce041" 00:23:54.664 ], 00:23:54.664 "product_name": "Malloc disk", 00:23:54.664 "block_size": 512, 00:23:54.664 "num_blocks": 65536, 00:23:54.664 "uuid": "84a42201-3685-48f7-8850-4e84709ce041", 00:23:54.664 "assigned_rate_limits": { 00:23:54.664 "rw_ios_per_sec": 0, 00:23:54.664 "rw_mbytes_per_sec": 0, 00:23:54.664 "r_mbytes_per_sec": 0, 00:23:54.664 "w_mbytes_per_sec": 0 00:23:54.664 }, 00:23:54.664 "claimed": true, 00:23:54.664 "claim_type": "exclusive_write", 00:23:54.664 "zoned": false, 00:23:54.664 "supported_io_types": { 00:23:54.664 "read": true, 00:23:54.664 "write": true, 00:23:54.664 "unmap": true, 00:23:54.664 "flush": true, 00:23:54.664 "reset": true, 00:23:54.664 "nvme_admin": false, 00:23:54.664 "nvme_io": false, 00:23:54.664 "nvme_io_md": false, 00:23:54.664 "write_zeroes": true, 00:23:54.664 "zcopy": 
true, 00:23:54.664 "get_zone_info": false, 00:23:54.664 "zone_management": false, 00:23:54.664 "zone_append": false, 00:23:54.664 "compare": false, 00:23:54.664 "compare_and_write": false, 00:23:54.664 "abort": true, 00:23:54.664 "seek_hole": false, 00:23:54.664 "seek_data": false, 00:23:54.664 "copy": true, 00:23:54.664 "nvme_iov_md": false 00:23:54.664 }, 00:23:54.664 "memory_domains": [ 00:23:54.664 { 00:23:54.664 "dma_device_id": "system", 00:23:54.664 "dma_device_type": 1 00:23:54.664 }, 00:23:54.664 { 00:23:54.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.664 "dma_device_type": 2 00:23:54.664 } 00:23:54.664 ], 00:23:54.664 "driver_specific": {} 00:23:54.664 } 00:23:54.664 ] 00:23:54.664 06:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:54.664 06:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:54.664 06:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:54.664 06:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:54.664 06:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:54.664 06:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:54.664 06:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:54.664 06:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:54.664 06:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:54.664 06:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:54.664 06:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:54.664 06:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:54.664 06:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.938 06:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:54.938 "name": "Existed_Raid", 00:23:54.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.938 "strip_size_kb": 64, 00:23:54.938 "state": "configuring", 00:23:54.938 "raid_level": "raid5f", 00:23:54.938 "superblock": false, 00:23:54.938 "num_base_bdevs": 4, 00:23:54.938 "num_base_bdevs_discovered": 3, 00:23:54.938 "num_base_bdevs_operational": 4, 00:23:54.938 "base_bdevs_list": [ 00:23:54.938 { 00:23:54.938 "name": "BaseBdev1", 00:23:54.938 "uuid": "84a42201-3685-48f7-8850-4e84709ce041", 00:23:54.938 "is_configured": true, 00:23:54.938 "data_offset": 0, 00:23:54.938 "data_size": 65536 00:23:54.938 }, 00:23:54.938 { 00:23:54.938 "name": null, 00:23:54.938 "uuid": "5fd50144-c301-4bef-8e2c-d8fe221685c8", 00:23:54.938 "is_configured": false, 00:23:54.938 "data_offset": 0, 00:23:54.938 "data_size": 65536 00:23:54.938 }, 00:23:54.938 { 00:23:54.938 "name": "BaseBdev3", 00:23:54.938 "uuid": "212f68d8-8943-4d4f-baf0-b4eebb96784c", 00:23:54.938 "is_configured": true, 00:23:54.938 "data_offset": 0, 00:23:54.938 "data_size": 65536 00:23:54.938 }, 00:23:54.938 { 00:23:54.938 "name": "BaseBdev4", 00:23:54.938 "uuid": 
"ffd5d996-4eee-423f-8f7a-36497cbe5059", 00:23:54.938 "is_configured": true, 00:23:54.938 "data_offset": 0, 00:23:54.938 "data_size": 65536 00:23:54.938 } 00:23:54.938 ] 00:23:54.938 }' 00:23:54.938 06:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:54.938 06:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.507 06:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:55.507 06:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.766 06:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:55.766 06:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:56.027 [2024-08-14 06:55:23.212345] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:56.027 06:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:56.027 06:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:56.027 06:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:56.027 06:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:56.027 06:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:56.027 06:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:56.027 06:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:56.027 06:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:56.027 06:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:56.027 06:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:56.027 06:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:56.027 06:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.286 06:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:56.286 "name": "Existed_Raid", 00:23:56.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:56.286 "strip_size_kb": 64, 00:23:56.286 "state": "configuring", 00:23:56.286 "raid_level": "raid5f", 00:23:56.286 "superblock": false, 00:23:56.286 "num_base_bdevs": 4, 00:23:56.286 "num_base_bdevs_discovered": 2, 00:23:56.286 "num_base_bdevs_operational": 4, 00:23:56.286 "base_bdevs_list": [ 00:23:56.286 { 00:23:56.286 "name": "BaseBdev1", 00:23:56.286 "uuid": "84a42201-3685-48f7-8850-4e84709ce041", 00:23:56.286 "is_configured": true, 00:23:56.286 "data_offset": 0, 00:23:56.286 "data_size": 65536 00:23:56.286 }, 00:23:56.286 { 00:23:56.286 "name": null, 00:23:56.286 "uuid": "5fd50144-c301-4bef-8e2c-d8fe221685c8", 00:23:56.286 "is_configured": false, 00:23:56.286 "data_offset": 0, 00:23:56.286 "data_size": 
65536 00:23:56.286 }, 00:23:56.286 { 00:23:56.286 "name": null, 00:23:56.286 "uuid": "212f68d8-8943-4d4f-baf0-b4eebb96784c", 00:23:56.286 "is_configured": false, 00:23:56.286 "data_offset": 0, 00:23:56.286 "data_size": 65536 00:23:56.286 }, 00:23:56.286 { 00:23:56.286 "name": "BaseBdev4", 00:23:56.286 "uuid": "ffd5d996-4eee-423f-8f7a-36497cbe5059", 00:23:56.286 "is_configured": true, 00:23:56.286 "data_offset": 0, 00:23:56.286 "data_size": 65536 00:23:56.286 } 00:23:56.286 ] 00:23:56.286 }' 00:23:56.286 06:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:56.286 06:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.858 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:56.858 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.117 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:57.117 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:57.376 [2024-08-14 06:55:24.518403] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:57.376 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:57.376 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:57.376 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:57.376 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:57.376 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:57.376 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:57.376 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:57.376 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:57.376 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:57.376 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:57.376 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:57.376 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.634 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:57.634 "name": "Existed_Raid", 00:23:57.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.634 "strip_size_kb": 64, 00:23:57.634 "state": "configuring", 00:23:57.634 "raid_level": "raid5f", 00:23:57.634 "superblock": false, 00:23:57.634 "num_base_bdevs": 4, 00:23:57.634 "num_base_bdevs_discovered": 3, 00:23:57.634 "num_base_bdevs_operational": 4, 00:23:57.634 "base_bdevs_list": [ 00:23:57.634 { 00:23:57.634 "name": "BaseBdev1", 00:23:57.634 "uuid": 
"84a42201-3685-48f7-8850-4e84709ce041", 00:23:57.634 "is_configured": true, 00:23:57.634 "data_offset": 0, 00:23:57.634 "data_size": 65536 00:23:57.634 }, 00:23:57.634 { 00:23:57.634 "name": null, 00:23:57.634 "uuid": "5fd50144-c301-4bef-8e2c-d8fe221685c8", 00:23:57.634 "is_configured": false, 00:23:57.634 "data_offset": 0, 00:23:57.634 "data_size": 65536 00:23:57.634 }, 00:23:57.634 { 00:23:57.634 "name": "BaseBdev3", 00:23:57.634 "uuid": "212f68d8-8943-4d4f-baf0-b4eebb96784c", 00:23:57.634 "is_configured": true, 00:23:57.634 "data_offset": 0, 00:23:57.634 "data_size": 65536 00:23:57.634 }, 00:23:57.634 { 00:23:57.634 "name": "BaseBdev4", 00:23:57.634 "uuid": "ffd5d996-4eee-423f-8f7a-36497cbe5059", 00:23:57.634 "is_configured": true, 00:23:57.634 "data_offset": 0, 00:23:57.634 "data_size": 65536 00:23:57.634 } 00:23:57.634 ] 00:23:57.634 }' 00:23:57.634 06:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:57.634 06:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.208 06:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.208 06:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:58.468 06:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:58.468 06:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:58.728 [2024-08-14 06:55:25.804348] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:58.728 06:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:58.728 06:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:58.728 06:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:58.728 06:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:58.728 06:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:58.728 06:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:58.728 06:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.728 06:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.728 06:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.728 06:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.728 06:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.728 06:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:58.988 06:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:58.988 "name": "Existed_Raid", 00:23:58.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.988 "strip_size_kb": 64, 00:23:58.988 "state": 
"configuring", 00:23:58.988 "raid_level": "raid5f", 00:23:58.988 "superblock": false, 00:23:58.988 "num_base_bdevs": 4, 00:23:58.988 "num_base_bdevs_discovered": 2, 00:23:58.988 "num_base_bdevs_operational": 4, 00:23:58.988 "base_bdevs_list": [ 00:23:58.988 { 00:23:58.988 "name": null, 00:23:58.988 "uuid": "84a42201-3685-48f7-8850-4e84709ce041", 00:23:58.988 "is_configured": false, 00:23:58.988 "data_offset": 0, 00:23:58.988 "data_size": 65536 00:23:58.988 }, 00:23:58.988 { 00:23:58.988 "name": null, 00:23:58.988 "uuid": "5fd50144-c301-4bef-8e2c-d8fe221685c8", 00:23:58.988 "is_configured": false, 00:23:58.988 "data_offset": 0, 00:23:58.988 "data_size": 65536 00:23:58.988 }, 00:23:58.988 { 00:23:58.988 "name": "BaseBdev3", 00:23:58.988 "uuid": "212f68d8-8943-4d4f-baf0-b4eebb96784c", 00:23:58.988 "is_configured": true, 00:23:58.988 "data_offset": 0, 00:23:58.988 "data_size": 65536 00:23:58.988 }, 00:23:58.988 { 00:23:58.988 "name": "BaseBdev4", 00:23:58.988 "uuid": "ffd5d996-4eee-423f-8f7a-36497cbe5059", 00:23:58.988 "is_configured": true, 00:23:58.988 "data_offset": 0, 00:23:58.988 "data_size": 65536 00:23:58.988 } 00:23:58.988 ] 00:23:58.988 }' 00:23:58.988 06:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:58.988 06:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.557 06:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.557 06:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:59.816 06:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:59.816 06:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:00.075 [2024-08-14 06:55:27.084942] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:00.075 06:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:00.075 06:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:00.075 06:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:00.075 06:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:00.075 06:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:00.075 06:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:00.075 06:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:00.075 06:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:00.075 06:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:00.075 06:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:00.076 06:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:00.076 06:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.335 06:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:00.335 "name": "Existed_Raid", 00:24:00.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.335 "strip_size_kb": 64, 00:24:00.335 "state": "configuring", 00:24:00.335 "raid_level": "raid5f", 00:24:00.335 "superblock": false, 00:24:00.335 "num_base_bdevs": 4, 00:24:00.335 "num_base_bdevs_discovered": 3, 00:24:00.335 "num_base_bdevs_operational": 4, 00:24:00.335 "base_bdevs_list": [ 00:24:00.335 { 00:24:00.335 "name": null, 00:24:00.335 "uuid": "84a42201-3685-48f7-8850-4e84709ce041", 00:24:00.335 "is_configured": false, 00:24:00.335 "data_offset": 0, 00:24:00.335 "data_size": 65536 00:24:00.335 }, 00:24:00.335 { 00:24:00.335 "name": "BaseBdev2", 00:24:00.335 "uuid": "5fd50144-c301-4bef-8e2c-d8fe221685c8", 00:24:00.335 "is_configured": true, 00:24:00.335 "data_offset": 0, 00:24:00.335 "data_size": 65536 00:24:00.335 }, 00:24:00.335 { 00:24:00.335 "name": "BaseBdev3", 00:24:00.335 "uuid": "212f68d8-8943-4d4f-baf0-b4eebb96784c", 00:24:00.335 "is_configured": true, 00:24:00.335 "data_offset": 0, 00:24:00.335 "data_size": 65536 00:24:00.335 }, 00:24:00.335 { 00:24:00.335 "name": "BaseBdev4", 00:24:00.335 "uuid": "ffd5d996-4eee-423f-8f7a-36497cbe5059", 00:24:00.335 "is_configured": true, 00:24:00.335 "data_offset": 0, 00:24:00.335 "data_size": 65536 00:24:00.335 } 00:24:00.335 ] 00:24:00.335 }' 00:24:00.335 06:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:00.335 06:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.904 06:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.904 06:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:00.904 06:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:00.904 06:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:00.904 06:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.162 06:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 84a42201-3685-48f7-8850-4e84709ce041 00:24:01.422 [2024-08-14 06:55:28.605724] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:01.422 [2024-08-14 06:55:28.605865] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:24:01.422 [2024-08-14 06:55:28.605884] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:24:01.422 [2024-08-14 06:55:28.606209] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:24:01.422 [2024-08-14 06:55:28.606725] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:24:01.422 [2024-08-14 06:55:28.606739] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:24:01.422 [2024-08-14 06:55:28.606965] 
bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:01.422 NewBaseBdev 00:24:01.422 06:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:01.422 06:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:24:01.422 06:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:01.422 06:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:01.422 06:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:01.422 06:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:01.422 06:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:01.693 06:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:01.969 [ 00:24:01.969 { 00:24:01.969 "name": "NewBaseBdev", 00:24:01.969 "aliases": [ 00:24:01.969 "84a42201-3685-48f7-8850-4e84709ce041" 00:24:01.969 ], 00:24:01.969 "product_name": "Malloc disk", 00:24:01.969 "block_size": 512, 00:24:01.969 "num_blocks": 65536, 00:24:01.969 "uuid": "84a42201-3685-48f7-8850-4e84709ce041", 00:24:01.969 "assigned_rate_limits": { 00:24:01.969 "rw_ios_per_sec": 0, 00:24:01.969 "rw_mbytes_per_sec": 0, 00:24:01.969 "r_mbytes_per_sec": 0, 00:24:01.969 "w_mbytes_per_sec": 0 00:24:01.969 }, 00:24:01.969 "claimed": true, 00:24:01.969 "claim_type": "exclusive_write", 00:24:01.969 "zoned": false, 00:24:01.969 "supported_io_types": { 00:24:01.969 "read": true, 00:24:01.969 "write": true, 00:24:01.969 "unmap": true, 00:24:01.969 "flush": true, 00:24:01.969 "reset": true, 00:24:01.969 "nvme_admin": false, 00:24:01.969 "nvme_io": false, 00:24:01.969 "nvme_io_md": false, 00:24:01.969 "write_zeroes": true, 00:24:01.969 "zcopy": true, 00:24:01.969 "get_zone_info": false, 00:24:01.969 "zone_management": false, 00:24:01.969 "zone_append": false, 00:24:01.969 "compare": false, 00:24:01.969 "compare_and_write": false, 00:24:01.969 "abort": true, 00:24:01.969 "seek_hole": false, 00:24:01.969 "seek_data": false, 00:24:01.969 "copy": true, 00:24:01.969 "nvme_iov_md": false 00:24:01.969 }, 00:24:01.969 "memory_domains": [ 00:24:01.969 { 00:24:01.969 "dma_device_id": "system", 00:24:01.969 "dma_device_type": 1 00:24:01.969 }, 00:24:01.969 { 00:24:01.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.969 "dma_device_type": 2 00:24:01.969 } 00:24:01.969 ], 00:24:01.969 "driver_specific": {} 00:24:01.969 } 00:24:01.969 ] 00:24:01.969 06:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:01.969 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:01.969 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:01.969 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:01.969 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:01.969 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:24:01.969 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:01.969 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:01.969 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:01.969 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:01.970 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:01.970 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.970 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:02.228 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:02.228 "name": "Existed_Raid", 00:24:02.228 "uuid": "0b333a20-ce8b-4ef0-9bd3-7461ad0c7b6a", 00:24:02.228 "strip_size_kb": 64, 00:24:02.228 "state": "online", 00:24:02.228 "raid_level": "raid5f", 00:24:02.228 "superblock": false, 00:24:02.228 "num_base_bdevs": 4, 00:24:02.228 "num_base_bdevs_discovered": 4, 00:24:02.228 "num_base_bdevs_operational": 4, 00:24:02.228 "base_bdevs_list": [ 00:24:02.228 { 00:24:02.228 "name": "NewBaseBdev", 00:24:02.228 "uuid": "84a42201-3685-48f7-8850-4e84709ce041", 00:24:02.228 "is_configured": true, 00:24:02.228 "data_offset": 0, 00:24:02.228 "data_size": 65536 00:24:02.228 }, 00:24:02.228 { 00:24:02.228 "name": "BaseBdev2", 00:24:02.228 "uuid": "5fd50144-c301-4bef-8e2c-d8fe221685c8", 00:24:02.228 "is_configured": true, 00:24:02.228 "data_offset": 0, 00:24:02.228 "data_size": 65536 00:24:02.228 }, 00:24:02.228 { 00:24:02.228 "name": "BaseBdev3", 00:24:02.228 "uuid": "212f68d8-8943-4d4f-baf0-b4eebb96784c", 00:24:02.228 "is_configured": true, 00:24:02.228 "data_offset": 0, 00:24:02.228 "data_size": 65536 00:24:02.228 }, 00:24:02.228 { 00:24:02.228 "name": "BaseBdev4", 00:24:02.228 "uuid": "ffd5d996-4eee-423f-8f7a-36497cbe5059", 00:24:02.228 "is_configured": true, 00:24:02.228 "data_offset": 0, 00:24:02.228 "data_size": 65536 00:24:02.228 } 00:24:02.228 ] 00:24:02.228 }' 00:24:02.229 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:02.229 06:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.798 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:02.799 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:02.799 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:02.799 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:02.799 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:02.799 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:02.799 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:02.799 06:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:03.058 [2024-08-14 06:55:30.055554] 
bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:03.058 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:03.058 "name": "Existed_Raid", 00:24:03.058 "aliases": [ 00:24:03.058 "0b333a20-ce8b-4ef0-9bd3-7461ad0c7b6a" 00:24:03.058 ], 00:24:03.058 "product_name": "Raid Volume", 00:24:03.058 "block_size": 512, 00:24:03.058 "num_blocks": 196608, 00:24:03.058 "uuid": "0b333a20-ce8b-4ef0-9bd3-7461ad0c7b6a", 00:24:03.058 "assigned_rate_limits": { 00:24:03.058 "rw_ios_per_sec": 0, 00:24:03.058 "rw_mbytes_per_sec": 0, 00:24:03.058 "r_mbytes_per_sec": 0, 00:24:03.058 "w_mbytes_per_sec": 0 00:24:03.058 }, 00:24:03.058 "claimed": false, 00:24:03.058 "zoned": false, 00:24:03.058 "supported_io_types": { 00:24:03.058 "read": true, 00:24:03.058 "write": true, 00:24:03.058 "unmap": false, 00:24:03.058 "flush": false, 00:24:03.058 "reset": true, 00:24:03.058 "nvme_admin": false, 00:24:03.058 "nvme_io": false, 00:24:03.058 "nvme_io_md": false, 00:24:03.058 "write_zeroes": true, 00:24:03.058 "zcopy": false, 00:24:03.058 "get_zone_info": false, 00:24:03.058 "zone_management": false, 00:24:03.058 "zone_append": false, 00:24:03.058 "compare": false, 00:24:03.058 "compare_and_write": false, 00:24:03.058 "abort": false, 00:24:03.058 "seek_hole": false, 00:24:03.058 "seek_data": false, 00:24:03.058 "copy": false, 00:24:03.058 "nvme_iov_md": false 00:24:03.058 }, 00:24:03.058 "driver_specific": { 00:24:03.058 "raid": { 00:24:03.058 "uuid": "0b333a20-ce8b-4ef0-9bd3-7461ad0c7b6a", 00:24:03.058 "strip_size_kb": 64, 00:24:03.058 "state": "online", 00:24:03.058 "raid_level": "raid5f", 00:24:03.058 "superblock": false, 00:24:03.058 "num_base_bdevs": 4, 00:24:03.058 "num_base_bdevs_discovered": 4, 00:24:03.058 "num_base_bdevs_operational": 4, 00:24:03.058 "base_bdevs_list": [ 00:24:03.058 { 00:24:03.058 "name": "NewBaseBdev", 00:24:03.058 "uuid": "84a42201-3685-48f7-8850-4e84709ce041", 00:24:03.058 "is_configured": true, 00:24:03.058 "data_offset": 0, 00:24:03.058 "data_size": 65536 00:24:03.058 }, 00:24:03.058 { 00:24:03.058 "name": "BaseBdev2", 00:24:03.058 "uuid": "5fd50144-c301-4bef-8e2c-d8fe221685c8", 00:24:03.058 "is_configured": true, 00:24:03.058 "data_offset": 0, 00:24:03.058 "data_size": 65536 00:24:03.058 }, 00:24:03.058 { 00:24:03.058 "name": "BaseBdev3", 00:24:03.058 "uuid": "212f68d8-8943-4d4f-baf0-b4eebb96784c", 00:24:03.058 "is_configured": true, 00:24:03.058 "data_offset": 0, 00:24:03.058 "data_size": 65536 00:24:03.058 }, 00:24:03.058 { 00:24:03.058 "name": "BaseBdev4", 00:24:03.058 "uuid": "ffd5d996-4eee-423f-8f7a-36497cbe5059", 00:24:03.058 "is_configured": true, 00:24:03.058 "data_offset": 0, 00:24:03.058 "data_size": 65536 00:24:03.058 } 00:24:03.058 ] 00:24:03.058 } 00:24:03.058 } 00:24:03.058 }' 00:24:03.058 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:03.058 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:03.058 BaseBdev2 00:24:03.058 BaseBdev3 00:24:03.058 BaseBdev4' 00:24:03.058 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:03.058 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:03.058 06:55:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:03.317 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:03.317 "name": "NewBaseBdev", 00:24:03.317 "aliases": [ 00:24:03.317 "84a42201-3685-48f7-8850-4e84709ce041" 00:24:03.317 ], 00:24:03.317 "product_name": "Malloc disk", 00:24:03.317 "block_size": 512, 00:24:03.317 "num_blocks": 65536, 00:24:03.317 "uuid": "84a42201-3685-48f7-8850-4e84709ce041", 00:24:03.317 "assigned_rate_limits": { 00:24:03.317 "rw_ios_per_sec": 0, 00:24:03.317 "rw_mbytes_per_sec": 0, 00:24:03.317 "r_mbytes_per_sec": 0, 00:24:03.317 "w_mbytes_per_sec": 0 00:24:03.317 }, 00:24:03.317 "claimed": true, 00:24:03.317 "claim_type": "exclusive_write", 00:24:03.317 "zoned": false, 00:24:03.317 "supported_io_types": { 00:24:03.317 "read": true, 00:24:03.317 "write": true, 00:24:03.317 "unmap": true, 00:24:03.317 "flush": true, 00:24:03.317 "reset": true, 00:24:03.317 "nvme_admin": false, 00:24:03.317 "nvme_io": false, 00:24:03.317 "nvme_io_md": false, 00:24:03.317 "write_zeroes": true, 00:24:03.317 "zcopy": true, 00:24:03.317 "get_zone_info": false, 00:24:03.317 "zone_management": false, 00:24:03.317 "zone_append": false, 00:24:03.317 "compare": false, 00:24:03.317 "compare_and_write": false, 00:24:03.317 "abort": true, 00:24:03.317 "seek_hole": false, 00:24:03.317 "seek_data": false, 00:24:03.317 "copy": true, 00:24:03.317 "nvme_iov_md": false 00:24:03.317 }, 00:24:03.317 "memory_domains": [ 00:24:03.317 { 00:24:03.317 "dma_device_id": "system", 00:24:03.317 "dma_device_type": 1 00:24:03.317 }, 00:24:03.317 { 00:24:03.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:03.317 "dma_device_type": 2 00:24:03.317 } 00:24:03.317 ], 00:24:03.317 "driver_specific": {} 00:24:03.317 }' 00:24:03.317 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:03.317 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:03.317 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:03.317 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:03.317 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:03.317 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:03.317 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:03.577 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:03.578 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:03.578 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:03.578 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:03.578 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:03.578 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:03.578 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:03.578 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:03.837 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:03.837 "name": 
"BaseBdev2", 00:24:03.837 "aliases": [ 00:24:03.837 "5fd50144-c301-4bef-8e2c-d8fe221685c8" 00:24:03.837 ], 00:24:03.837 "product_name": "Malloc disk", 00:24:03.837 "block_size": 512, 00:24:03.837 "num_blocks": 65536, 00:24:03.837 "uuid": "5fd50144-c301-4bef-8e2c-d8fe221685c8", 00:24:03.837 "assigned_rate_limits": { 00:24:03.837 "rw_ios_per_sec": 0, 00:24:03.837 "rw_mbytes_per_sec": 0, 00:24:03.837 "r_mbytes_per_sec": 0, 00:24:03.837 "w_mbytes_per_sec": 0 00:24:03.837 }, 00:24:03.837 "claimed": true, 00:24:03.837 "claim_type": "exclusive_write", 00:24:03.837 "zoned": false, 00:24:03.837 "supported_io_types": { 00:24:03.837 "read": true, 00:24:03.837 "write": true, 00:24:03.837 "unmap": true, 00:24:03.837 "flush": true, 00:24:03.837 "reset": true, 00:24:03.837 "nvme_admin": false, 00:24:03.837 "nvme_io": false, 00:24:03.837 "nvme_io_md": false, 00:24:03.837 "write_zeroes": true, 00:24:03.837 "zcopy": true, 00:24:03.837 "get_zone_info": false, 00:24:03.837 "zone_management": false, 00:24:03.837 "zone_append": false, 00:24:03.837 "compare": false, 00:24:03.837 "compare_and_write": false, 00:24:03.837 "abort": true, 00:24:03.837 "seek_hole": false, 00:24:03.837 "seek_data": false, 00:24:03.837 "copy": true, 00:24:03.837 "nvme_iov_md": false 00:24:03.837 }, 00:24:03.837 "memory_domains": [ 00:24:03.837 { 00:24:03.837 "dma_device_id": "system", 00:24:03.837 "dma_device_type": 1 00:24:03.837 }, 00:24:03.837 { 00:24:03.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:03.837 "dma_device_type": 2 00:24:03.837 } 00:24:03.837 ], 00:24:03.837 "driver_specific": {} 00:24:03.837 }' 00:24:03.837 06:55:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:03.837 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:03.837 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:03.837 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:04.096 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:04.096 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:04.096 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:04.096 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:04.096 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:04.096 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:04.096 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:04.096 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:04.096 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:04.096 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:04.096 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:04.356 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:04.356 "name": "BaseBdev3", 00:24:04.356 "aliases": [ 00:24:04.356 "212f68d8-8943-4d4f-baf0-b4eebb96784c" 00:24:04.356 ], 00:24:04.356 "product_name": "Malloc disk", 00:24:04.356 
"block_size": 512, 00:24:04.356 "num_blocks": 65536, 00:24:04.356 "uuid": "212f68d8-8943-4d4f-baf0-b4eebb96784c", 00:24:04.356 "assigned_rate_limits": { 00:24:04.356 "rw_ios_per_sec": 0, 00:24:04.356 "rw_mbytes_per_sec": 0, 00:24:04.356 "r_mbytes_per_sec": 0, 00:24:04.356 "w_mbytes_per_sec": 0 00:24:04.356 }, 00:24:04.356 "claimed": true, 00:24:04.356 "claim_type": "exclusive_write", 00:24:04.356 "zoned": false, 00:24:04.356 "supported_io_types": { 00:24:04.356 "read": true, 00:24:04.356 "write": true, 00:24:04.356 "unmap": true, 00:24:04.356 "flush": true, 00:24:04.356 "reset": true, 00:24:04.356 "nvme_admin": false, 00:24:04.356 "nvme_io": false, 00:24:04.356 "nvme_io_md": false, 00:24:04.356 "write_zeroes": true, 00:24:04.356 "zcopy": true, 00:24:04.356 "get_zone_info": false, 00:24:04.356 "zone_management": false, 00:24:04.356 "zone_append": false, 00:24:04.356 "compare": false, 00:24:04.356 "compare_and_write": false, 00:24:04.356 "abort": true, 00:24:04.356 "seek_hole": false, 00:24:04.356 "seek_data": false, 00:24:04.356 "copy": true, 00:24:04.356 "nvme_iov_md": false 00:24:04.356 }, 00:24:04.356 "memory_domains": [ 00:24:04.356 { 00:24:04.356 "dma_device_id": "system", 00:24:04.356 "dma_device_type": 1 00:24:04.356 }, 00:24:04.356 { 00:24:04.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:04.356 "dma_device_type": 2 00:24:04.356 } 00:24:04.356 ], 00:24:04.356 "driver_specific": {} 00:24:04.356 }' 00:24:04.356 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:04.356 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:04.616 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:04.616 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:04.616 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:04.616 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:04.616 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:04.616 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:04.616 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:04.616 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:04.875 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:04.875 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:04.875 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:04.875 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:04.875 06:55:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:05.134 06:55:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:05.134 "name": "BaseBdev4", 00:24:05.134 "aliases": [ 00:24:05.134 "ffd5d996-4eee-423f-8f7a-36497cbe5059" 00:24:05.134 ], 00:24:05.134 "product_name": "Malloc disk", 00:24:05.135 "block_size": 512, 00:24:05.135 "num_blocks": 65536, 00:24:05.135 "uuid": "ffd5d996-4eee-423f-8f7a-36497cbe5059", 00:24:05.135 "assigned_rate_limits": { 00:24:05.135 
"rw_ios_per_sec": 0, 00:24:05.135 "rw_mbytes_per_sec": 0, 00:24:05.135 "r_mbytes_per_sec": 0, 00:24:05.135 "w_mbytes_per_sec": 0 00:24:05.135 }, 00:24:05.135 "claimed": true, 00:24:05.135 "claim_type": "exclusive_write", 00:24:05.135 "zoned": false, 00:24:05.135 "supported_io_types": { 00:24:05.135 "read": true, 00:24:05.135 "write": true, 00:24:05.135 "unmap": true, 00:24:05.135 "flush": true, 00:24:05.135 "reset": true, 00:24:05.135 "nvme_admin": false, 00:24:05.135 "nvme_io": false, 00:24:05.135 "nvme_io_md": false, 00:24:05.135 "write_zeroes": true, 00:24:05.135 "zcopy": true, 00:24:05.135 "get_zone_info": false, 00:24:05.135 "zone_management": false, 00:24:05.135 "zone_append": false, 00:24:05.135 "compare": false, 00:24:05.135 "compare_and_write": false, 00:24:05.135 "abort": true, 00:24:05.135 "seek_hole": false, 00:24:05.135 "seek_data": false, 00:24:05.135 "copy": true, 00:24:05.135 "nvme_iov_md": false 00:24:05.135 }, 00:24:05.135 "memory_domains": [ 00:24:05.135 { 00:24:05.135 "dma_device_id": "system", 00:24:05.135 "dma_device_type": 1 00:24:05.135 }, 00:24:05.135 { 00:24:05.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.135 "dma_device_type": 2 00:24:05.135 } 00:24:05.135 ], 00:24:05.135 "driver_specific": {} 00:24:05.135 }' 00:24:05.135 06:55:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:05.135 06:55:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:05.135 06:55:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:05.135 06:55:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:05.135 06:55:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:05.135 06:55:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:05.135 06:55:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:05.394 06:55:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:05.394 06:55:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:05.394 06:55:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:05.394 06:55:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:05.394 06:55:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:05.394 06:55:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:05.654 [2024-08-14 06:55:32.750848] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:05.654 [2024-08-14 06:55:32.750971] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:05.654 [2024-08-14 06:55:32.751112] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:05.654 [2024-08-14 06:55:32.751460] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:05.654 [2024-08-14 06:55:32.751498] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:24:05.654 06:55:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 101823 00:24:05.654 06:55:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 101823 ']' 00:24:05.654 06:55:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # kill -0 101823 00:24:05.654 06:55:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # uname 00:24:05.654 06:55:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:05.654 06:55:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101823 00:24:05.654 killing process with pid 101823 00:24:05.654 06:55:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:05.654 06:55:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:05.654 06:55:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101823' 00:24:05.654 06:55:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@965 -- # kill 101823 00:24:05.654 [2024-08-14 06:55:32.814555] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:05.654 06:55:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # wait 101823 00:24:05.654 [2024-08-14 06:55:32.859064] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:05.914 06:55:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:24:05.914 00:24:05.914 real 0m32.882s 00:24:05.914 user 1m1.375s 00:24:05.914 sys 0m4.768s 00:24:05.914 06:55:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:05.914 ************************************ 00:24:05.914 END TEST raid5f_state_function_test 00:24:05.914 ************************************ 00:24:05.914 06:55:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.914 06:55:33 bdev_raid -- bdev/bdev_raid.sh@966 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:24:05.914 06:55:33 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:24:05.914 06:55:33 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:05.914 06:55:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:05.914 ************************************ 00:24:05.914 START TEST raid5f_state_function_test_sb 00:24:05.914 ************************************ 00:24:05.914 06:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid5f 4 true 00:24:05.914 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:24:05.914 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:05.914 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:24:05.914 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:06.174 06:55:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=102867 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:06.174 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 102867' 00:24:06.175 Process raid pid: 102867 00:24:06.175 06:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 102867 /var/tmp/spdk-raid.sock 00:24:06.175 06:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 102867 ']' 00:24:06.175 06:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:06.175 06:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:06.175 06:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:24:06.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:06.175 06:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:06.175 06:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.175 [2024-08-14 06:55:33.262784] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:24:06.175 [2024-08-14 06:55:33.263021] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.175 [2024-08-14 06:55:33.413588] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.433 [2024-08-14 06:55:33.476323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.434 [2024-08-14 06:55:33.520851] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:06.434 [2024-08-14 06:55:33.520978] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:07.002 06:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:07.002 06:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:24:07.002 06:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:07.262 [2024-08-14 06:55:34.348572] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:07.262 [2024-08-14 06:55:34.348632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:07.262 [2024-08-14 06:55:34.348645] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:07.262 [2024-08-14 06:55:34.348654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:07.262 [2024-08-14 06:55:34.348664] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:07.262 [2024-08-14 06:55:34.348671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:07.262 [2024-08-14 06:55:34.348681] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:07.262 [2024-08-14 06:55:34.348688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:07.262 06:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:07.262 06:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:07.262 06:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:07.262 06:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:07.262 06:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:07.262 06:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:07.262 06:55:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:07.262 06:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:07.262 06:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:07.262 06:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:07.262 06:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.262 06:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:07.521 06:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:07.521 "name": "Existed_Raid", 00:24:07.521 "uuid": "9d514cfa-1735-44a5-9aa1-3917d4c498f0", 00:24:07.521 "strip_size_kb": 64, 00:24:07.521 "state": "configuring", 00:24:07.521 "raid_level": "raid5f", 00:24:07.521 "superblock": true, 00:24:07.521 "num_base_bdevs": 4, 00:24:07.521 "num_base_bdevs_discovered": 0, 00:24:07.521 "num_base_bdevs_operational": 4, 00:24:07.521 "base_bdevs_list": [ 00:24:07.521 { 00:24:07.521 "name": "BaseBdev1", 00:24:07.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.521 "is_configured": false, 00:24:07.521 "data_offset": 0, 00:24:07.521 "data_size": 0 00:24:07.521 }, 00:24:07.521 { 00:24:07.521 "name": "BaseBdev2", 00:24:07.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.521 "is_configured": false, 00:24:07.521 "data_offset": 0, 00:24:07.521 "data_size": 0 00:24:07.521 }, 00:24:07.521 { 00:24:07.521 "name": "BaseBdev3", 00:24:07.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.521 "is_configured": false, 00:24:07.521 "data_offset": 0, 00:24:07.521 "data_size": 0 00:24:07.521 }, 00:24:07.521 { 00:24:07.521 "name": "BaseBdev4", 00:24:07.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.521 "is_configured": false, 00:24:07.521 "data_offset": 0, 00:24:07.521 "data_size": 0 00:24:07.521 } 00:24:07.521 ] 00:24:07.521 }' 00:24:07.521 06:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:07.521 06:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:08.090 06:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:08.350 [2024-08-14 06:55:35.434570] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:08.350 [2024-08-14 06:55:35.434701] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:24:08.350 06:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:08.609 [2024-08-14 06:55:35.650255] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:08.609 [2024-08-14 06:55:35.650393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:08.609 [2024-08-14 06:55:35.650429] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:08.609 [2024-08-14 
06:55:35.650454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:08.609 [2024-08-14 06:55:35.650476] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:08.609 [2024-08-14 06:55:35.650497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:08.609 [2024-08-14 06:55:35.650520] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:08.609 [2024-08-14 06:55:35.650541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:08.609 06:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:08.869 [2024-08-14 06:55:35.879031] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:08.869 BaseBdev1 00:24:08.869 06:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:08.869 06:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:24:08.869 06:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:08.869 06:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:08.869 06:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:08.869 06:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:08.869 06:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:08.869 06:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:09.129 [ 00:24:09.129 { 00:24:09.129 "name": "BaseBdev1", 00:24:09.129 "aliases": [ 00:24:09.129 "74b7687d-e2d2-4cb8-bb67-1743e58e1182" 00:24:09.129 ], 00:24:09.129 "product_name": "Malloc disk", 00:24:09.129 "block_size": 512, 00:24:09.129 "num_blocks": 65536, 00:24:09.129 "uuid": "74b7687d-e2d2-4cb8-bb67-1743e58e1182", 00:24:09.129 "assigned_rate_limits": { 00:24:09.129 "rw_ios_per_sec": 0, 00:24:09.129 "rw_mbytes_per_sec": 0, 00:24:09.129 "r_mbytes_per_sec": 0, 00:24:09.129 "w_mbytes_per_sec": 0 00:24:09.129 }, 00:24:09.129 "claimed": true, 00:24:09.129 "claim_type": "exclusive_write", 00:24:09.129 "zoned": false, 00:24:09.129 "supported_io_types": { 00:24:09.129 "read": true, 00:24:09.129 "write": true, 00:24:09.129 "unmap": true, 00:24:09.129 "flush": true, 00:24:09.129 "reset": true, 00:24:09.129 "nvme_admin": false, 00:24:09.129 "nvme_io": false, 00:24:09.129 "nvme_io_md": false, 00:24:09.129 "write_zeroes": true, 00:24:09.129 "zcopy": true, 00:24:09.129 "get_zone_info": false, 00:24:09.129 "zone_management": false, 00:24:09.129 "zone_append": false, 00:24:09.129 "compare": false, 00:24:09.129 "compare_and_write": false, 00:24:09.129 "abort": true, 00:24:09.129 "seek_hole": false, 00:24:09.129 "seek_data": false, 00:24:09.129 "copy": true, 00:24:09.129 "nvme_iov_md": false 00:24:09.129 }, 00:24:09.129 "memory_domains": [ 00:24:09.129 { 00:24:09.129 "dma_device_id": "system", 00:24:09.129 "dma_device_type": 1 00:24:09.129 }, 00:24:09.129 { 
00:24:09.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:09.129 "dma_device_type": 2 00:24:09.129 } 00:24:09.129 ], 00:24:09.129 "driver_specific": {} 00:24:09.129 } 00:24:09.129 ] 00:24:09.129 06:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:09.129 06:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:09.129 06:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:09.129 06:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:09.129 06:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:09.129 06:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:09.129 06:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:09.129 06:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:09.129 06:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:09.129 06:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:09.129 06:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:09.129 06:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.129 06:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:09.388 06:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:09.388 "name": "Existed_Raid", 00:24:09.388 "uuid": "bf0f3fd1-ecb3-4de9-bfeb-64fb38aedd68", 00:24:09.388 "strip_size_kb": 64, 00:24:09.388 "state": "configuring", 00:24:09.388 "raid_level": "raid5f", 00:24:09.388 "superblock": true, 00:24:09.388 "num_base_bdevs": 4, 00:24:09.388 "num_base_bdevs_discovered": 1, 00:24:09.388 "num_base_bdevs_operational": 4, 00:24:09.388 "base_bdevs_list": [ 00:24:09.388 { 00:24:09.388 "name": "BaseBdev1", 00:24:09.388 "uuid": "74b7687d-e2d2-4cb8-bb67-1743e58e1182", 00:24:09.388 "is_configured": true, 00:24:09.388 "data_offset": 2048, 00:24:09.388 "data_size": 63488 00:24:09.388 }, 00:24:09.388 { 00:24:09.388 "name": "BaseBdev2", 00:24:09.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.388 "is_configured": false, 00:24:09.388 "data_offset": 0, 00:24:09.388 "data_size": 0 00:24:09.388 }, 00:24:09.388 { 00:24:09.388 "name": "BaseBdev3", 00:24:09.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.388 "is_configured": false, 00:24:09.388 "data_offset": 0, 00:24:09.388 "data_size": 0 00:24:09.388 }, 00:24:09.388 { 00:24:09.388 "name": "BaseBdev4", 00:24:09.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.388 "is_configured": false, 00:24:09.388 "data_offset": 0, 00:24:09.388 "data_size": 0 00:24:09.388 } 00:24:09.388 ] 00:24:09.388 }' 00:24:09.388 06:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:09.388 06:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:09.955 06:55:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:10.214 [2024-08-14 06:55:37.404544] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:10.214 [2024-08-14 06:55:37.404630] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:24:10.214 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:10.474 [2024-08-14 06:55:37.648242] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:10.474 [2024-08-14 06:55:37.650277] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:10.474 [2024-08-14 06:55:37.650329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:10.474 [2024-08-14 06:55:37.650346] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:10.474 [2024-08-14 06:55:37.650355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:10.474 [2024-08-14 06:55:37.650364] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:10.474 [2024-08-14 06:55:37.650372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:10.474 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:10.474 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:10.474 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:10.474 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:10.474 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:10.474 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:10.474 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:10.474 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:10.474 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:10.474 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:10.474 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:10.474 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:10.474 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.474 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:10.733 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:10.733 "name": "Existed_Raid", 00:24:10.733 "uuid": "3239eab2-e431-4c1c-8e07-6ff6fceebaeb", 00:24:10.733 
"strip_size_kb": 64, 00:24:10.733 "state": "configuring", 00:24:10.733 "raid_level": "raid5f", 00:24:10.733 "superblock": true, 00:24:10.733 "num_base_bdevs": 4, 00:24:10.733 "num_base_bdevs_discovered": 1, 00:24:10.733 "num_base_bdevs_operational": 4, 00:24:10.733 "base_bdevs_list": [ 00:24:10.733 { 00:24:10.733 "name": "BaseBdev1", 00:24:10.733 "uuid": "74b7687d-e2d2-4cb8-bb67-1743e58e1182", 00:24:10.733 "is_configured": true, 00:24:10.733 "data_offset": 2048, 00:24:10.733 "data_size": 63488 00:24:10.733 }, 00:24:10.733 { 00:24:10.733 "name": "BaseBdev2", 00:24:10.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.733 "is_configured": false, 00:24:10.733 "data_offset": 0, 00:24:10.733 "data_size": 0 00:24:10.734 }, 00:24:10.734 { 00:24:10.734 "name": "BaseBdev3", 00:24:10.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.734 "is_configured": false, 00:24:10.734 "data_offset": 0, 00:24:10.734 "data_size": 0 00:24:10.734 }, 00:24:10.734 { 00:24:10.734 "name": "BaseBdev4", 00:24:10.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.734 "is_configured": false, 00:24:10.734 "data_offset": 0, 00:24:10.734 "data_size": 0 00:24:10.734 } 00:24:10.734 ] 00:24:10.734 }' 00:24:10.734 06:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:10.734 06:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.300 06:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:11.558 [2024-08-14 06:55:38.756001] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:11.558 BaseBdev2 00:24:11.558 06:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:11.558 06:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:24:11.558 06:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:11.558 06:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:11.558 06:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:11.559 06:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:11.559 06:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:11.857 06:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:12.117 [ 00:24:12.117 { 00:24:12.117 "name": "BaseBdev2", 00:24:12.117 "aliases": [ 00:24:12.117 "6b9b80ca-7ed0-43eb-b1e6-4856b02fa0c1" 00:24:12.117 ], 00:24:12.117 "product_name": "Malloc disk", 00:24:12.117 "block_size": 512, 00:24:12.117 "num_blocks": 65536, 00:24:12.117 "uuid": "6b9b80ca-7ed0-43eb-b1e6-4856b02fa0c1", 00:24:12.117 "assigned_rate_limits": { 00:24:12.117 "rw_ios_per_sec": 0, 00:24:12.117 "rw_mbytes_per_sec": 0, 00:24:12.117 "r_mbytes_per_sec": 0, 00:24:12.117 "w_mbytes_per_sec": 0 00:24:12.117 }, 00:24:12.117 "claimed": true, 00:24:12.117 "claim_type": "exclusive_write", 00:24:12.117 "zoned": false, 00:24:12.117 "supported_io_types": { 00:24:12.117 
"read": true, 00:24:12.117 "write": true, 00:24:12.117 "unmap": true, 00:24:12.117 "flush": true, 00:24:12.117 "reset": true, 00:24:12.117 "nvme_admin": false, 00:24:12.117 "nvme_io": false, 00:24:12.117 "nvme_io_md": false, 00:24:12.117 "write_zeroes": true, 00:24:12.117 "zcopy": true, 00:24:12.117 "get_zone_info": false, 00:24:12.117 "zone_management": false, 00:24:12.117 "zone_append": false, 00:24:12.117 "compare": false, 00:24:12.117 "compare_and_write": false, 00:24:12.117 "abort": true, 00:24:12.117 "seek_hole": false, 00:24:12.117 "seek_data": false, 00:24:12.117 "copy": true, 00:24:12.117 "nvme_iov_md": false 00:24:12.117 }, 00:24:12.117 "memory_domains": [ 00:24:12.117 { 00:24:12.117 "dma_device_id": "system", 00:24:12.117 "dma_device_type": 1 00:24:12.117 }, 00:24:12.117 { 00:24:12.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:12.117 "dma_device_type": 2 00:24:12.117 } 00:24:12.117 ], 00:24:12.117 "driver_specific": {} 00:24:12.117 } 00:24:12.117 ] 00:24:12.117 06:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:12.117 06:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:12.117 06:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:12.117 06:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:12.117 06:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:12.117 06:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:12.117 06:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:12.117 06:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:12.117 06:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:12.117 06:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:12.117 06:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:12.117 06:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:12.117 06:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:12.117 06:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:12.117 06:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.377 06:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:12.377 "name": "Existed_Raid", 00:24:12.377 "uuid": "3239eab2-e431-4c1c-8e07-6ff6fceebaeb", 00:24:12.377 "strip_size_kb": 64, 00:24:12.377 "state": "configuring", 00:24:12.377 "raid_level": "raid5f", 00:24:12.377 "superblock": true, 00:24:12.377 "num_base_bdevs": 4, 00:24:12.377 "num_base_bdevs_discovered": 2, 00:24:12.377 "num_base_bdevs_operational": 4, 00:24:12.377 "base_bdevs_list": [ 00:24:12.377 { 00:24:12.377 "name": "BaseBdev1", 00:24:12.377 "uuid": "74b7687d-e2d2-4cb8-bb67-1743e58e1182", 00:24:12.377 "is_configured": true, 00:24:12.377 "data_offset": 2048, 00:24:12.377 
"data_size": 63488 00:24:12.377 }, 00:24:12.377 { 00:24:12.377 "name": "BaseBdev2", 00:24:12.377 "uuid": "6b9b80ca-7ed0-43eb-b1e6-4856b02fa0c1", 00:24:12.377 "is_configured": true, 00:24:12.377 "data_offset": 2048, 00:24:12.377 "data_size": 63488 00:24:12.377 }, 00:24:12.377 { 00:24:12.377 "name": "BaseBdev3", 00:24:12.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.377 "is_configured": false, 00:24:12.377 "data_offset": 0, 00:24:12.377 "data_size": 0 00:24:12.377 }, 00:24:12.377 { 00:24:12.377 "name": "BaseBdev4", 00:24:12.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.377 "is_configured": false, 00:24:12.377 "data_offset": 0, 00:24:12.377 "data_size": 0 00:24:12.377 } 00:24:12.377 ] 00:24:12.377 }' 00:24:12.377 06:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:12.377 06:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.944 06:55:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:13.202 [2024-08-14 06:55:40.368723] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:13.202 BaseBdev3 00:24:13.202 06:55:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:13.202 06:55:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:24:13.202 06:55:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:13.202 06:55:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:13.202 06:55:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:13.202 06:55:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:13.202 06:55:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:13.461 06:55:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:13.719 [ 00:24:13.720 { 00:24:13.720 "name": "BaseBdev3", 00:24:13.720 "aliases": [ 00:24:13.720 "68a87f4f-f85b-41d5-900d-aee62025873a" 00:24:13.720 ], 00:24:13.720 "product_name": "Malloc disk", 00:24:13.720 "block_size": 512, 00:24:13.720 "num_blocks": 65536, 00:24:13.720 "uuid": "68a87f4f-f85b-41d5-900d-aee62025873a", 00:24:13.720 "assigned_rate_limits": { 00:24:13.720 "rw_ios_per_sec": 0, 00:24:13.720 "rw_mbytes_per_sec": 0, 00:24:13.720 "r_mbytes_per_sec": 0, 00:24:13.720 "w_mbytes_per_sec": 0 00:24:13.720 }, 00:24:13.720 "claimed": true, 00:24:13.720 "claim_type": "exclusive_write", 00:24:13.720 "zoned": false, 00:24:13.720 "supported_io_types": { 00:24:13.720 "read": true, 00:24:13.720 "write": true, 00:24:13.720 "unmap": true, 00:24:13.720 "flush": true, 00:24:13.720 "reset": true, 00:24:13.720 "nvme_admin": false, 00:24:13.720 "nvme_io": false, 00:24:13.720 "nvme_io_md": false, 00:24:13.720 "write_zeroes": true, 00:24:13.720 "zcopy": true, 00:24:13.720 "get_zone_info": false, 00:24:13.720 "zone_management": false, 00:24:13.720 "zone_append": false, 00:24:13.720 "compare": false, 00:24:13.720 "compare_and_write": false, 
00:24:13.720 "abort": true, 00:24:13.720 "seek_hole": false, 00:24:13.720 "seek_data": false, 00:24:13.720 "copy": true, 00:24:13.720 "nvme_iov_md": false 00:24:13.720 }, 00:24:13.720 "memory_domains": [ 00:24:13.720 { 00:24:13.720 "dma_device_id": "system", 00:24:13.720 "dma_device_type": 1 00:24:13.720 }, 00:24:13.720 { 00:24:13.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:13.720 "dma_device_type": 2 00:24:13.720 } 00:24:13.720 ], 00:24:13.720 "driver_specific": {} 00:24:13.720 } 00:24:13.720 ] 00:24:13.720 06:55:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:13.720 06:55:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:13.720 06:55:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:13.720 06:55:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:13.720 06:55:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:13.720 06:55:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:13.720 06:55:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:13.720 06:55:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:13.720 06:55:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:13.720 06:55:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:13.720 06:55:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:13.720 06:55:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:13.720 06:55:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:13.720 06:55:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.720 06:55:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:13.979 06:55:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:13.979 "name": "Existed_Raid", 00:24:13.979 "uuid": "3239eab2-e431-4c1c-8e07-6ff6fceebaeb", 00:24:13.979 "strip_size_kb": 64, 00:24:13.979 "state": "configuring", 00:24:13.979 "raid_level": "raid5f", 00:24:13.979 "superblock": true, 00:24:13.979 "num_base_bdevs": 4, 00:24:13.979 "num_base_bdevs_discovered": 3, 00:24:13.979 "num_base_bdevs_operational": 4, 00:24:13.979 "base_bdevs_list": [ 00:24:13.979 { 00:24:13.979 "name": "BaseBdev1", 00:24:13.979 "uuid": "74b7687d-e2d2-4cb8-bb67-1743e58e1182", 00:24:13.979 "is_configured": true, 00:24:13.979 "data_offset": 2048, 00:24:13.979 "data_size": 63488 00:24:13.979 }, 00:24:13.979 { 00:24:13.979 "name": "BaseBdev2", 00:24:13.979 "uuid": "6b9b80ca-7ed0-43eb-b1e6-4856b02fa0c1", 00:24:13.979 "is_configured": true, 00:24:13.979 "data_offset": 2048, 00:24:13.979 "data_size": 63488 00:24:13.979 }, 00:24:13.979 { 00:24:13.979 "name": "BaseBdev3", 00:24:13.979 "uuid": "68a87f4f-f85b-41d5-900d-aee62025873a", 00:24:13.979 "is_configured": true, 00:24:13.979 "data_offset": 2048, 00:24:13.979 "data_size": 63488 
00:24:13.979 }, 00:24:13.979 { 00:24:13.979 "name": "BaseBdev4", 00:24:13.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.979 "is_configured": false, 00:24:13.979 "data_offset": 0, 00:24:13.979 "data_size": 0 00:24:13.979 } 00:24:13.979 ] 00:24:13.979 }' 00:24:13.979 06:55:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:13.979 06:55:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.546 06:55:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:14.804 [2024-08-14 06:55:41.933515] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:14.804 [2024-08-14 06:55:41.933784] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:24:14.804 [2024-08-14 06:55:41.933807] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:14.804 [2024-08-14 06:55:41.934156] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:24:14.804 [2024-08-14 06:55:41.934736] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:24:14.804 [2024-08-14 06:55:41.934767] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:24:14.804 [2024-08-14 06:55:41.934913] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:14.804 BaseBdev4 00:24:14.804 06:55:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:14.804 06:55:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:24:14.804 06:55:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:14.804 06:55:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:14.804 06:55:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:14.804 06:55:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:14.804 06:55:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:15.064 06:55:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:15.322 [ 00:24:15.322 { 00:24:15.322 "name": "BaseBdev4", 00:24:15.322 "aliases": [ 00:24:15.322 "ce5560f0-d896-4c51-a6d4-495226994de4" 00:24:15.322 ], 00:24:15.322 "product_name": "Malloc disk", 00:24:15.322 "block_size": 512, 00:24:15.322 "num_blocks": 65536, 00:24:15.322 "uuid": "ce5560f0-d896-4c51-a6d4-495226994de4", 00:24:15.322 "assigned_rate_limits": { 00:24:15.322 "rw_ios_per_sec": 0, 00:24:15.322 "rw_mbytes_per_sec": 0, 00:24:15.322 "r_mbytes_per_sec": 0, 00:24:15.322 "w_mbytes_per_sec": 0 00:24:15.322 }, 00:24:15.322 "claimed": true, 00:24:15.322 "claim_type": "exclusive_write", 00:24:15.322 "zoned": false, 00:24:15.322 "supported_io_types": { 00:24:15.322 "read": true, 00:24:15.322 "write": true, 00:24:15.322 "unmap": true, 00:24:15.322 "flush": true, 00:24:15.322 "reset": true, 00:24:15.322 "nvme_admin": false, 
00:24:15.322 "nvme_io": false, 00:24:15.322 "nvme_io_md": false, 00:24:15.322 "write_zeroes": true, 00:24:15.322 "zcopy": true, 00:24:15.322 "get_zone_info": false, 00:24:15.322 "zone_management": false, 00:24:15.322 "zone_append": false, 00:24:15.322 "compare": false, 00:24:15.322 "compare_and_write": false, 00:24:15.322 "abort": true, 00:24:15.322 "seek_hole": false, 00:24:15.322 "seek_data": false, 00:24:15.322 "copy": true, 00:24:15.322 "nvme_iov_md": false 00:24:15.322 }, 00:24:15.322 "memory_domains": [ 00:24:15.322 { 00:24:15.322 "dma_device_id": "system", 00:24:15.322 "dma_device_type": 1 00:24:15.322 }, 00:24:15.322 { 00:24:15.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.322 "dma_device_type": 2 00:24:15.322 } 00:24:15.322 ], 00:24:15.322 "driver_specific": {} 00:24:15.322 } 00:24:15.322 ] 00:24:15.322 06:55:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:15.322 06:55:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:15.322 06:55:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:15.322 06:55:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:15.322 06:55:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:15.322 06:55:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:15.322 06:55:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:15.322 06:55:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:15.322 06:55:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:15.322 06:55:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:15.322 06:55:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:15.322 06:55:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:15.322 06:55:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:15.322 06:55:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.322 06:55:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:15.603 06:55:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:15.603 "name": "Existed_Raid", 00:24:15.603 "uuid": "3239eab2-e431-4c1c-8e07-6ff6fceebaeb", 00:24:15.603 "strip_size_kb": 64, 00:24:15.603 "state": "online", 00:24:15.603 "raid_level": "raid5f", 00:24:15.603 "superblock": true, 00:24:15.603 "num_base_bdevs": 4, 00:24:15.603 "num_base_bdevs_discovered": 4, 00:24:15.603 "num_base_bdevs_operational": 4, 00:24:15.603 "base_bdevs_list": [ 00:24:15.603 { 00:24:15.603 "name": "BaseBdev1", 00:24:15.603 "uuid": "74b7687d-e2d2-4cb8-bb67-1743e58e1182", 00:24:15.603 "is_configured": true, 00:24:15.603 "data_offset": 2048, 00:24:15.603 "data_size": 63488 00:24:15.603 }, 00:24:15.603 { 00:24:15.603 "name": "BaseBdev2", 00:24:15.603 "uuid": "6b9b80ca-7ed0-43eb-b1e6-4856b02fa0c1", 00:24:15.603 "is_configured": true, 
00:24:15.603 "data_offset": 2048, 00:24:15.603 "data_size": 63488 00:24:15.603 }, 00:24:15.603 { 00:24:15.603 "name": "BaseBdev3", 00:24:15.603 "uuid": "68a87f4f-f85b-41d5-900d-aee62025873a", 00:24:15.603 "is_configured": true, 00:24:15.603 "data_offset": 2048, 00:24:15.603 "data_size": 63488 00:24:15.603 }, 00:24:15.603 { 00:24:15.603 "name": "BaseBdev4", 00:24:15.603 "uuid": "ce5560f0-d896-4c51-a6d4-495226994de4", 00:24:15.603 "is_configured": true, 00:24:15.603 "data_offset": 2048, 00:24:15.603 "data_size": 63488 00:24:15.603 } 00:24:15.603 ] 00:24:15.603 }' 00:24:15.603 06:55:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:15.603 06:55:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.216 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:16.216 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:16.216 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:16.216 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:16.216 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:16.216 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:16.216 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:16.216 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:16.474 [2024-08-14 06:55:43.503218] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:16.474 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:16.474 "name": "Existed_Raid", 00:24:16.474 "aliases": [ 00:24:16.474 "3239eab2-e431-4c1c-8e07-6ff6fceebaeb" 00:24:16.474 ], 00:24:16.474 "product_name": "Raid Volume", 00:24:16.474 "block_size": 512, 00:24:16.474 "num_blocks": 190464, 00:24:16.474 "uuid": "3239eab2-e431-4c1c-8e07-6ff6fceebaeb", 00:24:16.474 "assigned_rate_limits": { 00:24:16.474 "rw_ios_per_sec": 0, 00:24:16.474 "rw_mbytes_per_sec": 0, 00:24:16.474 "r_mbytes_per_sec": 0, 00:24:16.474 "w_mbytes_per_sec": 0 00:24:16.474 }, 00:24:16.474 "claimed": false, 00:24:16.474 "zoned": false, 00:24:16.474 "supported_io_types": { 00:24:16.474 "read": true, 00:24:16.474 "write": true, 00:24:16.474 "unmap": false, 00:24:16.474 "flush": false, 00:24:16.474 "reset": true, 00:24:16.474 "nvme_admin": false, 00:24:16.474 "nvme_io": false, 00:24:16.474 "nvme_io_md": false, 00:24:16.474 "write_zeroes": true, 00:24:16.474 "zcopy": false, 00:24:16.474 "get_zone_info": false, 00:24:16.474 "zone_management": false, 00:24:16.474 "zone_append": false, 00:24:16.474 "compare": false, 00:24:16.474 "compare_and_write": false, 00:24:16.474 "abort": false, 00:24:16.474 "seek_hole": false, 00:24:16.474 "seek_data": false, 00:24:16.474 "copy": false, 00:24:16.474 "nvme_iov_md": false 00:24:16.474 }, 00:24:16.474 "driver_specific": { 00:24:16.474 "raid": { 00:24:16.474 "uuid": "3239eab2-e431-4c1c-8e07-6ff6fceebaeb", 00:24:16.474 "strip_size_kb": 64, 00:24:16.474 "state": "online", 00:24:16.474 "raid_level": "raid5f", 00:24:16.474 "superblock": true, 00:24:16.474 "num_base_bdevs": 4, 
00:24:16.474 "num_base_bdevs_discovered": 4, 00:24:16.474 "num_base_bdevs_operational": 4, 00:24:16.474 "base_bdevs_list": [ 00:24:16.474 { 00:24:16.474 "name": "BaseBdev1", 00:24:16.474 "uuid": "74b7687d-e2d2-4cb8-bb67-1743e58e1182", 00:24:16.474 "is_configured": true, 00:24:16.474 "data_offset": 2048, 00:24:16.474 "data_size": 63488 00:24:16.474 }, 00:24:16.474 { 00:24:16.474 "name": "BaseBdev2", 00:24:16.474 "uuid": "6b9b80ca-7ed0-43eb-b1e6-4856b02fa0c1", 00:24:16.474 "is_configured": true, 00:24:16.474 "data_offset": 2048, 00:24:16.474 "data_size": 63488 00:24:16.474 }, 00:24:16.474 { 00:24:16.474 "name": "BaseBdev3", 00:24:16.474 "uuid": "68a87f4f-f85b-41d5-900d-aee62025873a", 00:24:16.474 "is_configured": true, 00:24:16.474 "data_offset": 2048, 00:24:16.474 "data_size": 63488 00:24:16.474 }, 00:24:16.474 { 00:24:16.474 "name": "BaseBdev4", 00:24:16.474 "uuid": "ce5560f0-d896-4c51-a6d4-495226994de4", 00:24:16.474 "is_configured": true, 00:24:16.474 "data_offset": 2048, 00:24:16.474 "data_size": 63488 00:24:16.474 } 00:24:16.474 ] 00:24:16.474 } 00:24:16.474 } 00:24:16.474 }' 00:24:16.474 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:16.474 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:16.474 BaseBdev2 00:24:16.474 BaseBdev3 00:24:16.474 BaseBdev4' 00:24:16.474 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:16.474 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:16.474 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:16.732 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:16.732 "name": "BaseBdev1", 00:24:16.732 "aliases": [ 00:24:16.732 "74b7687d-e2d2-4cb8-bb67-1743e58e1182" 00:24:16.732 ], 00:24:16.732 "product_name": "Malloc disk", 00:24:16.732 "block_size": 512, 00:24:16.732 "num_blocks": 65536, 00:24:16.732 "uuid": "74b7687d-e2d2-4cb8-bb67-1743e58e1182", 00:24:16.732 "assigned_rate_limits": { 00:24:16.732 "rw_ios_per_sec": 0, 00:24:16.732 "rw_mbytes_per_sec": 0, 00:24:16.732 "r_mbytes_per_sec": 0, 00:24:16.732 "w_mbytes_per_sec": 0 00:24:16.732 }, 00:24:16.732 "claimed": true, 00:24:16.732 "claim_type": "exclusive_write", 00:24:16.732 "zoned": false, 00:24:16.732 "supported_io_types": { 00:24:16.732 "read": true, 00:24:16.732 "write": true, 00:24:16.732 "unmap": true, 00:24:16.732 "flush": true, 00:24:16.732 "reset": true, 00:24:16.732 "nvme_admin": false, 00:24:16.732 "nvme_io": false, 00:24:16.732 "nvme_io_md": false, 00:24:16.732 "write_zeroes": true, 00:24:16.732 "zcopy": true, 00:24:16.732 "get_zone_info": false, 00:24:16.732 "zone_management": false, 00:24:16.732 "zone_append": false, 00:24:16.732 "compare": false, 00:24:16.732 "compare_and_write": false, 00:24:16.732 "abort": true, 00:24:16.732 "seek_hole": false, 00:24:16.732 "seek_data": false, 00:24:16.732 "copy": true, 00:24:16.732 "nvme_iov_md": false 00:24:16.732 }, 00:24:16.732 "memory_domains": [ 00:24:16.732 { 00:24:16.732 "dma_device_id": "system", 00:24:16.732 "dma_device_type": 1 00:24:16.732 }, 00:24:16.732 { 00:24:16.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:16.732 "dma_device_type": 2 00:24:16.732 } 00:24:16.732 ], 
00:24:16.732 "driver_specific": {} 00:24:16.732 }' 00:24:16.732 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:16.732 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:16.732 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:16.732 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:16.732 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:16.732 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:16.732 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:16.992 06:55:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:16.992 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:16.992 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:16.992 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:16.992 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:16.992 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:16.992 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:16.992 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:17.252 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:17.252 "name": "BaseBdev2", 00:24:17.252 "aliases": [ 00:24:17.252 "6b9b80ca-7ed0-43eb-b1e6-4856b02fa0c1" 00:24:17.252 ], 00:24:17.252 "product_name": "Malloc disk", 00:24:17.252 "block_size": 512, 00:24:17.252 "num_blocks": 65536, 00:24:17.252 "uuid": "6b9b80ca-7ed0-43eb-b1e6-4856b02fa0c1", 00:24:17.252 "assigned_rate_limits": { 00:24:17.252 "rw_ios_per_sec": 0, 00:24:17.252 "rw_mbytes_per_sec": 0, 00:24:17.252 "r_mbytes_per_sec": 0, 00:24:17.252 "w_mbytes_per_sec": 0 00:24:17.252 }, 00:24:17.252 "claimed": true, 00:24:17.252 "claim_type": "exclusive_write", 00:24:17.252 "zoned": false, 00:24:17.252 "supported_io_types": { 00:24:17.252 "read": true, 00:24:17.252 "write": true, 00:24:17.252 "unmap": true, 00:24:17.252 "flush": true, 00:24:17.252 "reset": true, 00:24:17.252 "nvme_admin": false, 00:24:17.252 "nvme_io": false, 00:24:17.252 "nvme_io_md": false, 00:24:17.252 "write_zeroes": true, 00:24:17.252 "zcopy": true, 00:24:17.252 "get_zone_info": false, 00:24:17.252 "zone_management": false, 00:24:17.252 "zone_append": false, 00:24:17.252 "compare": false, 00:24:17.252 "compare_and_write": false, 00:24:17.252 "abort": true, 00:24:17.252 "seek_hole": false, 00:24:17.252 "seek_data": false, 00:24:17.252 "copy": true, 00:24:17.252 "nvme_iov_md": false 00:24:17.252 }, 00:24:17.252 "memory_domains": [ 00:24:17.252 { 00:24:17.252 "dma_device_id": "system", 00:24:17.252 "dma_device_type": 1 00:24:17.252 }, 00:24:17.252 { 00:24:17.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:17.252 "dma_device_type": 2 00:24:17.252 } 00:24:17.252 ], 00:24:17.252 "driver_specific": {} 00:24:17.252 }' 00:24:17.252 06:55:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:17.252 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:17.252 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:17.252 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:17.511 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:17.511 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:17.511 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:17.511 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:17.511 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:17.511 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:17.511 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:17.512 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:17.512 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:17.512 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:17.512 06:55:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:17.771 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:17.771 "name": "BaseBdev3", 00:24:17.771 "aliases": [ 00:24:17.771 "68a87f4f-f85b-41d5-900d-aee62025873a" 00:24:17.771 ], 00:24:17.771 "product_name": "Malloc disk", 00:24:17.771 "block_size": 512, 00:24:17.771 "num_blocks": 65536, 00:24:17.771 "uuid": "68a87f4f-f85b-41d5-900d-aee62025873a", 00:24:17.771 "assigned_rate_limits": { 00:24:17.771 "rw_ios_per_sec": 0, 00:24:17.771 "rw_mbytes_per_sec": 0, 00:24:17.771 "r_mbytes_per_sec": 0, 00:24:17.771 "w_mbytes_per_sec": 0 00:24:17.771 }, 00:24:17.771 "claimed": true, 00:24:17.771 "claim_type": "exclusive_write", 00:24:17.771 "zoned": false, 00:24:17.771 "supported_io_types": { 00:24:17.771 "read": true, 00:24:17.771 "write": true, 00:24:17.771 "unmap": true, 00:24:17.771 "flush": true, 00:24:17.771 "reset": true, 00:24:17.771 "nvme_admin": false, 00:24:17.771 "nvme_io": false, 00:24:17.771 "nvme_io_md": false, 00:24:17.771 "write_zeroes": true, 00:24:17.771 "zcopy": true, 00:24:17.771 "get_zone_info": false, 00:24:17.771 "zone_management": false, 00:24:17.771 "zone_append": false, 00:24:17.771 "compare": false, 00:24:17.771 "compare_and_write": false, 00:24:17.771 "abort": true, 00:24:17.771 "seek_hole": false, 00:24:17.771 "seek_data": false, 00:24:17.771 "copy": true, 00:24:17.771 "nvme_iov_md": false 00:24:17.771 }, 00:24:17.771 "memory_domains": [ 00:24:17.771 { 00:24:17.771 "dma_device_id": "system", 00:24:17.771 "dma_device_type": 1 00:24:17.771 }, 00:24:17.771 { 00:24:17.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:17.771 "dma_device_type": 2 00:24:17.771 } 00:24:17.771 ], 00:24:17.771 "driver_specific": {} 00:24:17.771 }' 00:24:17.771 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:18.030 06:55:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:18.030 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:18.030 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:18.030 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:18.030 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:18.030 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:18.030 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:18.290 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:18.290 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:18.290 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:18.290 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:18.290 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:18.290 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:18.290 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:18.550 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:18.550 "name": "BaseBdev4", 00:24:18.550 "aliases": [ 00:24:18.550 "ce5560f0-d896-4c51-a6d4-495226994de4" 00:24:18.550 ], 00:24:18.550 "product_name": "Malloc disk", 00:24:18.550 "block_size": 512, 00:24:18.550 "num_blocks": 65536, 00:24:18.550 "uuid": "ce5560f0-d896-4c51-a6d4-495226994de4", 00:24:18.550 "assigned_rate_limits": { 00:24:18.550 "rw_ios_per_sec": 0, 00:24:18.550 "rw_mbytes_per_sec": 0, 00:24:18.550 "r_mbytes_per_sec": 0, 00:24:18.550 "w_mbytes_per_sec": 0 00:24:18.550 }, 00:24:18.550 "claimed": true, 00:24:18.550 "claim_type": "exclusive_write", 00:24:18.550 "zoned": false, 00:24:18.550 "supported_io_types": { 00:24:18.550 "read": true, 00:24:18.550 "write": true, 00:24:18.550 "unmap": true, 00:24:18.550 "flush": true, 00:24:18.550 "reset": true, 00:24:18.550 "nvme_admin": false, 00:24:18.550 "nvme_io": false, 00:24:18.550 "nvme_io_md": false, 00:24:18.550 "write_zeroes": true, 00:24:18.550 "zcopy": true, 00:24:18.550 "get_zone_info": false, 00:24:18.550 "zone_management": false, 00:24:18.550 "zone_append": false, 00:24:18.550 "compare": false, 00:24:18.550 "compare_and_write": false, 00:24:18.550 "abort": true, 00:24:18.550 "seek_hole": false, 00:24:18.550 "seek_data": false, 00:24:18.550 "copy": true, 00:24:18.550 "nvme_iov_md": false 00:24:18.550 }, 00:24:18.550 "memory_domains": [ 00:24:18.550 { 00:24:18.550 "dma_device_id": "system", 00:24:18.550 "dma_device_type": 1 00:24:18.550 }, 00:24:18.550 { 00:24:18.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:18.550 "dma_device_type": 2 00:24:18.550 } 00:24:18.550 ], 00:24:18.550 "driver_specific": {} 00:24:18.550 }' 00:24:18.550 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:18.550 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:18.550 06:55:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:18.550 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:18.550 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:18.809 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:18.809 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:18.809 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:18.809 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:18.809 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:18.809 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:18.809 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:18.809 06:55:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:19.068 [2024-08-14 06:55:46.198632] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.068 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.327 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:19.327 "name": "Existed_Raid", 00:24:19.327 "uuid": 
"3239eab2-e431-4c1c-8e07-6ff6fceebaeb", 00:24:19.327 "strip_size_kb": 64, 00:24:19.327 "state": "online", 00:24:19.327 "raid_level": "raid5f", 00:24:19.327 "superblock": true, 00:24:19.327 "num_base_bdevs": 4, 00:24:19.327 "num_base_bdevs_discovered": 3, 00:24:19.327 "num_base_bdevs_operational": 3, 00:24:19.327 "base_bdevs_list": [ 00:24:19.327 { 00:24:19.327 "name": null, 00:24:19.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.327 "is_configured": false, 00:24:19.327 "data_offset": 2048, 00:24:19.327 "data_size": 63488 00:24:19.327 }, 00:24:19.327 { 00:24:19.327 "name": "BaseBdev2", 00:24:19.327 "uuid": "6b9b80ca-7ed0-43eb-b1e6-4856b02fa0c1", 00:24:19.327 "is_configured": true, 00:24:19.327 "data_offset": 2048, 00:24:19.327 "data_size": 63488 00:24:19.327 }, 00:24:19.327 { 00:24:19.327 "name": "BaseBdev3", 00:24:19.327 "uuid": "68a87f4f-f85b-41d5-900d-aee62025873a", 00:24:19.327 "is_configured": true, 00:24:19.327 "data_offset": 2048, 00:24:19.327 "data_size": 63488 00:24:19.327 }, 00:24:19.327 { 00:24:19.327 "name": "BaseBdev4", 00:24:19.327 "uuid": "ce5560f0-d896-4c51-a6d4-495226994de4", 00:24:19.327 "is_configured": true, 00:24:19.327 "data_offset": 2048, 00:24:19.327 "data_size": 63488 00:24:19.327 } 00:24:19.327 ] 00:24:19.327 }' 00:24:19.327 06:55:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:19.327 06:55:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.972 06:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:19.972 06:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:19.972 06:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.972 06:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:20.230 06:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:20.230 06:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:20.230 06:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:20.489 [2024-08-14 06:55:47.672097] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:20.489 [2024-08-14 06:55:47.672311] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:20.489 [2024-08-14 06:55:47.684052] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:20.489 06:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:20.489 06:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:20.489 06:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.489 06:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:20.749 06:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:20.749 06:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:24:20.749 06:55:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:21.009 [2024-08-14 06:55:48.155384] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:21.009 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:21.009 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:21.009 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.009 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:21.268 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:21.268 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:21.268 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:21.528 [2024-08-14 06:55:48.670357] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:21.528 [2024-08-14 06:55:48.670434] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:24:21.528 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:21.528 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:21.528 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.528 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:21.787 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:21.787 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:21.787 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:21.787 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:21.787 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:21.787 06:55:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:22.048 BaseBdev2 00:24:22.048 06:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:22.048 06:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:24:22.048 06:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:22.048 06:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:22.048 06:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:22.048 06:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:22.048 06:55:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:22.307 06:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:22.566 [ 00:24:22.566 { 00:24:22.566 "name": "BaseBdev2", 00:24:22.566 "aliases": [ 00:24:22.566 "54323e56-5cbc-4b0b-95b6-e11d6f369a4f" 00:24:22.566 ], 00:24:22.566 "product_name": "Malloc disk", 00:24:22.566 "block_size": 512, 00:24:22.566 "num_blocks": 65536, 00:24:22.566 "uuid": "54323e56-5cbc-4b0b-95b6-e11d6f369a4f", 00:24:22.566 "assigned_rate_limits": { 00:24:22.566 "rw_ios_per_sec": 0, 00:24:22.566 "rw_mbytes_per_sec": 0, 00:24:22.566 "r_mbytes_per_sec": 0, 00:24:22.566 "w_mbytes_per_sec": 0 00:24:22.566 }, 00:24:22.566 "claimed": false, 00:24:22.566 "zoned": false, 00:24:22.566 "supported_io_types": { 00:24:22.566 "read": true, 00:24:22.566 "write": true, 00:24:22.566 "unmap": true, 00:24:22.566 "flush": true, 00:24:22.566 "reset": true, 00:24:22.566 "nvme_admin": false, 00:24:22.566 "nvme_io": false, 00:24:22.566 "nvme_io_md": false, 00:24:22.566 "write_zeroes": true, 00:24:22.566 "zcopy": true, 00:24:22.566 "get_zone_info": false, 00:24:22.566 "zone_management": false, 00:24:22.566 "zone_append": false, 00:24:22.566 "compare": false, 00:24:22.566 "compare_and_write": false, 00:24:22.566 "abort": true, 00:24:22.566 "seek_hole": false, 00:24:22.566 "seek_data": false, 00:24:22.566 "copy": true, 00:24:22.566 "nvme_iov_md": false 00:24:22.566 }, 00:24:22.566 "memory_domains": [ 00:24:22.566 { 00:24:22.566 "dma_device_id": "system", 00:24:22.566 "dma_device_type": 1 00:24:22.566 }, 00:24:22.566 { 00:24:22.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.566 "dma_device_type": 2 00:24:22.566 } 00:24:22.566 ], 00:24:22.566 "driver_specific": {} 00:24:22.566 } 00:24:22.566 ] 00:24:22.566 06:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:22.566 06:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:22.566 06:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:22.566 06:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:22.825 BaseBdev3 00:24:22.825 06:55:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:22.825 06:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:24:22.825 06:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:22.825 06:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:22.825 06:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:22.825 06:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:22.825 06:55:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:23.085 06:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:23.345 [ 00:24:23.346 { 00:24:23.346 "name": "BaseBdev3", 00:24:23.346 "aliases": [ 00:24:23.346 "6746263c-2337-499d-8fbd-56d478b0e163" 00:24:23.346 ], 00:24:23.346 "product_name": "Malloc disk", 00:24:23.346 "block_size": 512, 00:24:23.346 "num_blocks": 65536, 00:24:23.346 "uuid": "6746263c-2337-499d-8fbd-56d478b0e163", 00:24:23.346 "assigned_rate_limits": { 00:24:23.346 "rw_ios_per_sec": 0, 00:24:23.346 "rw_mbytes_per_sec": 0, 00:24:23.346 "r_mbytes_per_sec": 0, 00:24:23.346 "w_mbytes_per_sec": 0 00:24:23.346 }, 00:24:23.346 "claimed": false, 00:24:23.346 "zoned": false, 00:24:23.346 "supported_io_types": { 00:24:23.346 "read": true, 00:24:23.346 "write": true, 00:24:23.346 "unmap": true, 00:24:23.346 "flush": true, 00:24:23.346 "reset": true, 00:24:23.346 "nvme_admin": false, 00:24:23.346 "nvme_io": false, 00:24:23.346 "nvme_io_md": false, 00:24:23.346 "write_zeroes": true, 00:24:23.346 "zcopy": true, 00:24:23.346 "get_zone_info": false, 00:24:23.346 "zone_management": false, 00:24:23.346 "zone_append": false, 00:24:23.346 "compare": false, 00:24:23.346 "compare_and_write": false, 00:24:23.346 "abort": true, 00:24:23.346 "seek_hole": false, 00:24:23.346 "seek_data": false, 00:24:23.346 "copy": true, 00:24:23.346 "nvme_iov_md": false 00:24:23.346 }, 00:24:23.346 "memory_domains": [ 00:24:23.346 { 00:24:23.346 "dma_device_id": "system", 00:24:23.346 "dma_device_type": 1 00:24:23.346 }, 00:24:23.346 { 00:24:23.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:23.346 "dma_device_type": 2 00:24:23.346 } 00:24:23.346 ], 00:24:23.346 "driver_specific": {} 00:24:23.346 } 00:24:23.346 ] 00:24:23.346 06:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:23.346 06:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:23.346 06:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:23.346 06:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:23.605 BaseBdev4 00:24:23.605 06:55:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:23.605 06:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:24:23.605 06:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:23.605 06:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:23.605 06:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:23.605 06:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:23.605 06:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:23.911 06:55:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:23.911 [ 00:24:23.911 { 00:24:23.911 "name": "BaseBdev4", 00:24:23.911 "aliases": [ 00:24:23.911 "f36e5e40-569d-4c5a-8c68-ac4185bce243" 00:24:23.911 ], 00:24:23.911 
"product_name": "Malloc disk", 00:24:23.911 "block_size": 512, 00:24:23.911 "num_blocks": 65536, 00:24:23.911 "uuid": "f36e5e40-569d-4c5a-8c68-ac4185bce243", 00:24:23.911 "assigned_rate_limits": { 00:24:23.911 "rw_ios_per_sec": 0, 00:24:23.911 "rw_mbytes_per_sec": 0, 00:24:23.911 "r_mbytes_per_sec": 0, 00:24:23.911 "w_mbytes_per_sec": 0 00:24:23.911 }, 00:24:23.911 "claimed": false, 00:24:23.911 "zoned": false, 00:24:23.911 "supported_io_types": { 00:24:23.911 "read": true, 00:24:23.911 "write": true, 00:24:23.911 "unmap": true, 00:24:23.911 "flush": true, 00:24:23.911 "reset": true, 00:24:23.911 "nvme_admin": false, 00:24:23.911 "nvme_io": false, 00:24:23.911 "nvme_io_md": false, 00:24:23.911 "write_zeroes": true, 00:24:23.911 "zcopy": true, 00:24:23.911 "get_zone_info": false, 00:24:23.911 "zone_management": false, 00:24:23.911 "zone_append": false, 00:24:23.911 "compare": false, 00:24:23.911 "compare_and_write": false, 00:24:23.911 "abort": true, 00:24:23.911 "seek_hole": false, 00:24:23.911 "seek_data": false, 00:24:23.911 "copy": true, 00:24:23.911 "nvme_iov_md": false 00:24:23.911 }, 00:24:23.911 "memory_domains": [ 00:24:23.911 { 00:24:23.911 "dma_device_id": "system", 00:24:23.911 "dma_device_type": 1 00:24:23.911 }, 00:24:23.911 { 00:24:23.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:23.911 "dma_device_type": 2 00:24:23.911 } 00:24:23.911 ], 00:24:23.911 "driver_specific": {} 00:24:23.911 } 00:24:23.911 ] 00:24:23.911 06:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:23.911 06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:23.911 06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:23.911 06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:24.172 [2024-08-14 06:55:51.374581] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:24.172 [2024-08-14 06:55:51.375242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:24.172 [2024-08-14 06:55:51.375331] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:24.172 [2024-08-14 06:55:51.377418] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:24.172 [2024-08-14 06:55:51.377540] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:24.172 06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:24.172 06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:24.172 06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:24.172 06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:24.172 06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:24.172 06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:24.172 06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:24.172 
06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:24.172 06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:24.172 06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:24.172 06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.172 06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:24.432 06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:24.432 "name": "Existed_Raid", 00:24:24.432 "uuid": "9aa42a35-8eaa-4404-bc2d-e1e579a98f4a", 00:24:24.432 "strip_size_kb": 64, 00:24:24.432 "state": "configuring", 00:24:24.432 "raid_level": "raid5f", 00:24:24.432 "superblock": true, 00:24:24.432 "num_base_bdevs": 4, 00:24:24.432 "num_base_bdevs_discovered": 3, 00:24:24.432 "num_base_bdevs_operational": 4, 00:24:24.432 "base_bdevs_list": [ 00:24:24.432 { 00:24:24.432 "name": "BaseBdev1", 00:24:24.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.432 "is_configured": false, 00:24:24.432 "data_offset": 0, 00:24:24.432 "data_size": 0 00:24:24.432 }, 00:24:24.432 { 00:24:24.432 "name": "BaseBdev2", 00:24:24.432 "uuid": "54323e56-5cbc-4b0b-95b6-e11d6f369a4f", 00:24:24.432 "is_configured": true, 00:24:24.432 "data_offset": 2048, 00:24:24.432 "data_size": 63488 00:24:24.432 }, 00:24:24.432 { 00:24:24.432 "name": "BaseBdev3", 00:24:24.432 "uuid": "6746263c-2337-499d-8fbd-56d478b0e163", 00:24:24.432 "is_configured": true, 00:24:24.432 "data_offset": 2048, 00:24:24.432 "data_size": 63488 00:24:24.432 }, 00:24:24.432 { 00:24:24.432 "name": "BaseBdev4", 00:24:24.432 "uuid": "f36e5e40-569d-4c5a-8c68-ac4185bce243", 00:24:24.432 "is_configured": true, 00:24:24.432 "data_offset": 2048, 00:24:24.432 "data_size": 63488 00:24:24.432 } 00:24:24.432 ] 00:24:24.432 }' 00:24:24.432 06:55:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:24.432 06:55:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.001 06:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:25.259 [2024-08-14 06:55:52.456879] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:25.259 06:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:25.259 06:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:25.259 06:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:25.259 06:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:25.259 06:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:25.259 06:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:25.259 06:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:25.259 06:55:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:25.259 06:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:25.259 06:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:25.259 06:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.259 06:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.519 06:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:25.519 "name": "Existed_Raid", 00:24:25.519 "uuid": "9aa42a35-8eaa-4404-bc2d-e1e579a98f4a", 00:24:25.519 "strip_size_kb": 64, 00:24:25.519 "state": "configuring", 00:24:25.519 "raid_level": "raid5f", 00:24:25.519 "superblock": true, 00:24:25.519 "num_base_bdevs": 4, 00:24:25.519 "num_base_bdevs_discovered": 2, 00:24:25.519 "num_base_bdevs_operational": 4, 00:24:25.519 "base_bdevs_list": [ 00:24:25.519 { 00:24:25.519 "name": "BaseBdev1", 00:24:25.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.519 "is_configured": false, 00:24:25.519 "data_offset": 0, 00:24:25.519 "data_size": 0 00:24:25.519 }, 00:24:25.519 { 00:24:25.519 "name": null, 00:24:25.519 "uuid": "54323e56-5cbc-4b0b-95b6-e11d6f369a4f", 00:24:25.519 "is_configured": false, 00:24:25.519 "data_offset": 2048, 00:24:25.519 "data_size": 63488 00:24:25.519 }, 00:24:25.519 { 00:24:25.519 "name": "BaseBdev3", 00:24:25.519 "uuid": "6746263c-2337-499d-8fbd-56d478b0e163", 00:24:25.519 "is_configured": true, 00:24:25.519 "data_offset": 2048, 00:24:25.519 "data_size": 63488 00:24:25.519 }, 00:24:25.519 { 00:24:25.519 "name": "BaseBdev4", 00:24:25.519 "uuid": "f36e5e40-569d-4c5a-8c68-ac4185bce243", 00:24:25.519 "is_configured": true, 00:24:25.519 "data_offset": 2048, 00:24:25.519 "data_size": 63488 00:24:25.519 } 00:24:25.519 ] 00:24:25.519 }' 00:24:25.519 06:55:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:25.519 06:55:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.454 06:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.454 06:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:26.454 06:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:26.454 06:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:26.712 [2024-08-14 06:55:53.829452] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:26.712 BaseBdev1 00:24:26.712 06:55:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:26.712 06:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:24:26.712 06:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:26.712 06:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:26.712 06:55:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:26.713 06:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:26.713 06:55:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:26.970 06:55:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:27.229 [ 00:24:27.229 { 00:24:27.229 "name": "BaseBdev1", 00:24:27.229 "aliases": [ 00:24:27.229 "42dce9ed-f773-433e-8e27-b664ce8568fb" 00:24:27.229 ], 00:24:27.229 "product_name": "Malloc disk", 00:24:27.229 "block_size": 512, 00:24:27.229 "num_blocks": 65536, 00:24:27.229 "uuid": "42dce9ed-f773-433e-8e27-b664ce8568fb", 00:24:27.229 "assigned_rate_limits": { 00:24:27.229 "rw_ios_per_sec": 0, 00:24:27.229 "rw_mbytes_per_sec": 0, 00:24:27.229 "r_mbytes_per_sec": 0, 00:24:27.229 "w_mbytes_per_sec": 0 00:24:27.229 }, 00:24:27.229 "claimed": true, 00:24:27.229 "claim_type": "exclusive_write", 00:24:27.229 "zoned": false, 00:24:27.229 "supported_io_types": { 00:24:27.229 "read": true, 00:24:27.229 "write": true, 00:24:27.229 "unmap": true, 00:24:27.229 "flush": true, 00:24:27.229 "reset": true, 00:24:27.229 "nvme_admin": false, 00:24:27.229 "nvme_io": false, 00:24:27.229 "nvme_io_md": false, 00:24:27.229 "write_zeroes": true, 00:24:27.229 "zcopy": true, 00:24:27.229 "get_zone_info": false, 00:24:27.229 "zone_management": false, 00:24:27.229 "zone_append": false, 00:24:27.229 "compare": false, 00:24:27.229 "compare_and_write": false, 00:24:27.229 "abort": true, 00:24:27.229 "seek_hole": false, 00:24:27.229 "seek_data": false, 00:24:27.229 "copy": true, 00:24:27.229 "nvme_iov_md": false 00:24:27.229 }, 00:24:27.229 "memory_domains": [ 00:24:27.229 { 00:24:27.229 "dma_device_id": "system", 00:24:27.229 "dma_device_type": 1 00:24:27.229 }, 00:24:27.229 { 00:24:27.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:27.229 "dma_device_type": 2 00:24:27.229 } 00:24:27.229 ], 00:24:27.229 "driver_specific": {} 00:24:27.229 } 00:24:27.229 ] 00:24:27.229 06:55:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:27.229 06:55:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:27.229 06:55:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:27.229 06:55:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:27.229 06:55:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:27.229 06:55:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:27.229 06:55:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:27.229 06:55:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:27.229 06:55:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:27.229 06:55:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:27.229 06:55:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:24:27.229 06:55:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.229 06:55:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.488 06:55:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:27.488 "name": "Existed_Raid", 00:24:27.488 "uuid": "9aa42a35-8eaa-4404-bc2d-e1e579a98f4a", 00:24:27.488 "strip_size_kb": 64, 00:24:27.488 "state": "configuring", 00:24:27.488 "raid_level": "raid5f", 00:24:27.488 "superblock": true, 00:24:27.488 "num_base_bdevs": 4, 00:24:27.488 "num_base_bdevs_discovered": 3, 00:24:27.488 "num_base_bdevs_operational": 4, 00:24:27.488 "base_bdevs_list": [ 00:24:27.488 { 00:24:27.488 "name": "BaseBdev1", 00:24:27.488 "uuid": "42dce9ed-f773-433e-8e27-b664ce8568fb", 00:24:27.488 "is_configured": true, 00:24:27.488 "data_offset": 2048, 00:24:27.488 "data_size": 63488 00:24:27.488 }, 00:24:27.488 { 00:24:27.488 "name": null, 00:24:27.488 "uuid": "54323e56-5cbc-4b0b-95b6-e11d6f369a4f", 00:24:27.488 "is_configured": false, 00:24:27.488 "data_offset": 2048, 00:24:27.488 "data_size": 63488 00:24:27.488 }, 00:24:27.488 { 00:24:27.488 "name": "BaseBdev3", 00:24:27.488 "uuid": "6746263c-2337-499d-8fbd-56d478b0e163", 00:24:27.488 "is_configured": true, 00:24:27.488 "data_offset": 2048, 00:24:27.488 "data_size": 63488 00:24:27.488 }, 00:24:27.488 { 00:24:27.488 "name": "BaseBdev4", 00:24:27.488 "uuid": "f36e5e40-569d-4c5a-8c68-ac4185bce243", 00:24:27.488 "is_configured": true, 00:24:27.488 "data_offset": 2048, 00:24:27.488 "data_size": 63488 00:24:27.488 } 00:24:27.488 ] 00:24:27.488 }' 00:24:27.488 06:55:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:27.488 06:55:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.060 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:28.060 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.325 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:28.325 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:28.325 [2024-08-14 06:55:55.558636] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:28.583 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:28.583 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:28.583 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:28.583 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:28.583 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:28.583 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:28.583 06:55:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:28.583 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:28.583 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:28.583 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:28.583 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.583 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.841 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:28.841 "name": "Existed_Raid", 00:24:28.841 "uuid": "9aa42a35-8eaa-4404-bc2d-e1e579a98f4a", 00:24:28.841 "strip_size_kb": 64, 00:24:28.841 "state": "configuring", 00:24:28.841 "raid_level": "raid5f", 00:24:28.841 "superblock": true, 00:24:28.841 "num_base_bdevs": 4, 00:24:28.841 "num_base_bdevs_discovered": 2, 00:24:28.841 "num_base_bdevs_operational": 4, 00:24:28.841 "base_bdevs_list": [ 00:24:28.841 { 00:24:28.841 "name": "BaseBdev1", 00:24:28.841 "uuid": "42dce9ed-f773-433e-8e27-b664ce8568fb", 00:24:28.841 "is_configured": true, 00:24:28.841 "data_offset": 2048, 00:24:28.841 "data_size": 63488 00:24:28.841 }, 00:24:28.841 { 00:24:28.841 "name": null, 00:24:28.841 "uuid": "54323e56-5cbc-4b0b-95b6-e11d6f369a4f", 00:24:28.841 "is_configured": false, 00:24:28.841 "data_offset": 2048, 00:24:28.841 "data_size": 63488 00:24:28.841 }, 00:24:28.841 { 00:24:28.841 "name": null, 00:24:28.841 "uuid": "6746263c-2337-499d-8fbd-56d478b0e163", 00:24:28.841 "is_configured": false, 00:24:28.841 "data_offset": 2048, 00:24:28.841 "data_size": 63488 00:24:28.841 }, 00:24:28.841 { 00:24:28.841 "name": "BaseBdev4", 00:24:28.841 "uuid": "f36e5e40-569d-4c5a-8c68-ac4185bce243", 00:24:28.841 "is_configured": true, 00:24:28.841 "data_offset": 2048, 00:24:28.841 "data_size": 63488 00:24:28.841 } 00:24:28.841 ] 00:24:28.841 }' 00:24:28.841 06:55:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:28.841 06:55:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.408 06:55:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.408 06:55:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:29.667 06:55:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:29.667 06:55:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:29.926 [2024-08-14 06:55:56.956585] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:29.926 06:55:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:29.926 06:55:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:29.926 06:55:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:24:29.926 06:55:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:29.926 06:55:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:29.926 06:55:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:29.926 06:55:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:29.926 06:55:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:29.926 06:55:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:29.926 06:55:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:29.926 06:55:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:29.926 06:55:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.185 06:55:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:30.185 "name": "Existed_Raid", 00:24:30.185 "uuid": "9aa42a35-8eaa-4404-bc2d-e1e579a98f4a", 00:24:30.185 "strip_size_kb": 64, 00:24:30.185 "state": "configuring", 00:24:30.185 "raid_level": "raid5f", 00:24:30.185 "superblock": true, 00:24:30.185 "num_base_bdevs": 4, 00:24:30.185 "num_base_bdevs_discovered": 3, 00:24:30.185 "num_base_bdevs_operational": 4, 00:24:30.185 "base_bdevs_list": [ 00:24:30.185 { 00:24:30.185 "name": "BaseBdev1", 00:24:30.185 "uuid": "42dce9ed-f773-433e-8e27-b664ce8568fb", 00:24:30.185 "is_configured": true, 00:24:30.185 "data_offset": 2048, 00:24:30.185 "data_size": 63488 00:24:30.185 }, 00:24:30.185 { 00:24:30.185 "name": null, 00:24:30.185 "uuid": "54323e56-5cbc-4b0b-95b6-e11d6f369a4f", 00:24:30.185 "is_configured": false, 00:24:30.185 "data_offset": 2048, 00:24:30.185 "data_size": 63488 00:24:30.185 }, 00:24:30.185 { 00:24:30.185 "name": "BaseBdev3", 00:24:30.185 "uuid": "6746263c-2337-499d-8fbd-56d478b0e163", 00:24:30.185 "is_configured": true, 00:24:30.185 "data_offset": 2048, 00:24:30.185 "data_size": 63488 00:24:30.185 }, 00:24:30.185 { 00:24:30.185 "name": "BaseBdev4", 00:24:30.185 "uuid": "f36e5e40-569d-4c5a-8c68-ac4185bce243", 00:24:30.185 "is_configured": true, 00:24:30.185 "data_offset": 2048, 00:24:30.185 "data_size": 63488 00:24:30.185 } 00:24:30.185 ] 00:24:30.185 }' 00:24:30.185 06:55:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:30.185 06:55:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.752 06:55:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:30.752 06:55:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.011 06:55:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:31.011 06:55:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:31.269 [2024-08-14 06:55:58.302441] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:24:31.269 06:55:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:31.269 06:55:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:31.269 06:55:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:31.269 06:55:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:31.269 06:55:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:31.269 06:55:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:31.269 06:55:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:31.269 06:55:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:31.269 06:55:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:31.269 06:55:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:31.269 06:55:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:31.269 06:55:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.528 06:55:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:31.528 "name": "Existed_Raid", 00:24:31.528 "uuid": "9aa42a35-8eaa-4404-bc2d-e1e579a98f4a", 00:24:31.528 "strip_size_kb": 64, 00:24:31.528 "state": "configuring", 00:24:31.528 "raid_level": "raid5f", 00:24:31.528 "superblock": true, 00:24:31.528 "num_base_bdevs": 4, 00:24:31.528 "num_base_bdevs_discovered": 2, 00:24:31.528 "num_base_bdevs_operational": 4, 00:24:31.528 "base_bdevs_list": [ 00:24:31.528 { 00:24:31.528 "name": null, 00:24:31.528 "uuid": "42dce9ed-f773-433e-8e27-b664ce8568fb", 00:24:31.528 "is_configured": false, 00:24:31.528 "data_offset": 2048, 00:24:31.528 "data_size": 63488 00:24:31.528 }, 00:24:31.528 { 00:24:31.528 "name": null, 00:24:31.528 "uuid": "54323e56-5cbc-4b0b-95b6-e11d6f369a4f", 00:24:31.528 "is_configured": false, 00:24:31.528 "data_offset": 2048, 00:24:31.528 "data_size": 63488 00:24:31.528 }, 00:24:31.528 { 00:24:31.528 "name": "BaseBdev3", 00:24:31.528 "uuid": "6746263c-2337-499d-8fbd-56d478b0e163", 00:24:31.528 "is_configured": true, 00:24:31.528 "data_offset": 2048, 00:24:31.528 "data_size": 63488 00:24:31.528 }, 00:24:31.528 { 00:24:31.528 "name": "BaseBdev4", 00:24:31.528 "uuid": "f36e5e40-569d-4c5a-8c68-ac4185bce243", 00:24:31.528 "is_configured": true, 00:24:31.528 "data_offset": 2048, 00:24:31.528 "data_size": 63488 00:24:31.528 } 00:24:31.528 ] 00:24:31.528 }' 00:24:31.528 06:55:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:31.528 06:55:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.097 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:32.097 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.357 06:55:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:32.357 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:32.616 [2024-08-14 06:55:59.675536] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:32.616 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:32.616 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:32.616 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:32.616 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:32.616 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:32.616 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:32.616 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:32.616 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:32.616 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:32.616 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:32.616 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.616 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:32.884 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:32.884 "name": "Existed_Raid", 00:24:32.884 "uuid": "9aa42a35-8eaa-4404-bc2d-e1e579a98f4a", 00:24:32.884 "strip_size_kb": 64, 00:24:32.884 "state": "configuring", 00:24:32.884 "raid_level": "raid5f", 00:24:32.884 "superblock": true, 00:24:32.884 "num_base_bdevs": 4, 00:24:32.884 "num_base_bdevs_discovered": 3, 00:24:32.884 "num_base_bdevs_operational": 4, 00:24:32.884 "base_bdevs_list": [ 00:24:32.884 { 00:24:32.884 "name": null, 00:24:32.884 "uuid": "42dce9ed-f773-433e-8e27-b664ce8568fb", 00:24:32.884 "is_configured": false, 00:24:32.884 "data_offset": 2048, 00:24:32.884 "data_size": 63488 00:24:32.884 }, 00:24:32.884 { 00:24:32.884 "name": "BaseBdev2", 00:24:32.884 "uuid": "54323e56-5cbc-4b0b-95b6-e11d6f369a4f", 00:24:32.884 "is_configured": true, 00:24:32.884 "data_offset": 2048, 00:24:32.884 "data_size": 63488 00:24:32.884 }, 00:24:32.884 { 00:24:32.884 "name": "BaseBdev3", 00:24:32.884 "uuid": "6746263c-2337-499d-8fbd-56d478b0e163", 00:24:32.884 "is_configured": true, 00:24:32.884 "data_offset": 2048, 00:24:32.884 "data_size": 63488 00:24:32.884 }, 00:24:32.884 { 00:24:32.884 "name": "BaseBdev4", 00:24:32.884 "uuid": "f36e5e40-569d-4c5a-8c68-ac4185bce243", 00:24:32.884 "is_configured": true, 00:24:32.884 "data_offset": 2048, 00:24:32.884 "data_size": 63488 00:24:32.884 } 00:24:32.885 ] 00:24:32.885 }' 00:24:32.885 06:55:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:32.885 06:55:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.469 06:56:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:33.469 06:56:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.728 06:56:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:33.728 06:56:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:33.728 06:56:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.987 06:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 42dce9ed-f773-433e-8e27-b664ce8568fb 00:24:34.247 [2024-08-14 06:56:01.292304] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:34.247 [2024-08-14 06:56:01.292609] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:24:34.247 [2024-08-14 06:56:01.292693] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:34.247 [2024-08-14 06:56:01.293018] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:24:34.247 [2024-08-14 06:56:01.293605] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:24:34.247 [2024-08-14 06:56:01.293662] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:24:34.247 NewBaseBdev 00:24:34.247 [2024-08-14 06:56:01.293832] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:34.247 06:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:34.247 06:56:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:24:34.247 06:56:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:34.247 06:56:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:34.247 06:56:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:34.247 06:56:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:34.247 06:56:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:34.506 06:56:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:34.765 [ 00:24:34.765 { 00:24:34.765 "name": "NewBaseBdev", 00:24:34.765 "aliases": [ 00:24:34.765 "42dce9ed-f773-433e-8e27-b664ce8568fb" 00:24:34.765 ], 00:24:34.765 "product_name": "Malloc disk", 00:24:34.765 "block_size": 512, 00:24:34.765 "num_blocks": 65536, 00:24:34.765 "uuid": "42dce9ed-f773-433e-8e27-b664ce8568fb", 00:24:34.765 "assigned_rate_limits": { 00:24:34.765 "rw_ios_per_sec": 0, 00:24:34.765 "rw_mbytes_per_sec": 
0, 00:24:34.766 "r_mbytes_per_sec": 0, 00:24:34.766 "w_mbytes_per_sec": 0 00:24:34.766 }, 00:24:34.766 "claimed": true, 00:24:34.766 "claim_type": "exclusive_write", 00:24:34.766 "zoned": false, 00:24:34.766 "supported_io_types": { 00:24:34.766 "read": true, 00:24:34.766 "write": true, 00:24:34.766 "unmap": true, 00:24:34.766 "flush": true, 00:24:34.766 "reset": true, 00:24:34.766 "nvme_admin": false, 00:24:34.766 "nvme_io": false, 00:24:34.766 "nvme_io_md": false, 00:24:34.766 "write_zeroes": true, 00:24:34.766 "zcopy": true, 00:24:34.766 "get_zone_info": false, 00:24:34.766 "zone_management": false, 00:24:34.766 "zone_append": false, 00:24:34.766 "compare": false, 00:24:34.766 "compare_and_write": false, 00:24:34.766 "abort": true, 00:24:34.766 "seek_hole": false, 00:24:34.766 "seek_data": false, 00:24:34.766 "copy": true, 00:24:34.766 "nvme_iov_md": false 00:24:34.766 }, 00:24:34.766 "memory_domains": [ 00:24:34.766 { 00:24:34.766 "dma_device_id": "system", 00:24:34.766 "dma_device_type": 1 00:24:34.766 }, 00:24:34.766 { 00:24:34.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:34.766 "dma_device_type": 2 00:24:34.766 } 00:24:34.766 ], 00:24:34.766 "driver_specific": {} 00:24:34.766 } 00:24:34.766 ] 00:24:34.766 06:56:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:34.766 06:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:34.766 06:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:34.766 06:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:34.766 06:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:34.766 06:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:34.766 06:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:34.766 06:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:34.766 06:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:34.766 06:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:34.766 06:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:34.766 06:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:34.766 06:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.042 06:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:35.042 "name": "Existed_Raid", 00:24:35.042 "uuid": "9aa42a35-8eaa-4404-bc2d-e1e579a98f4a", 00:24:35.042 "strip_size_kb": 64, 00:24:35.042 "state": "online", 00:24:35.042 "raid_level": "raid5f", 00:24:35.042 "superblock": true, 00:24:35.042 "num_base_bdevs": 4, 00:24:35.042 "num_base_bdevs_discovered": 4, 00:24:35.042 "num_base_bdevs_operational": 4, 00:24:35.042 "base_bdevs_list": [ 00:24:35.042 { 00:24:35.042 "name": "NewBaseBdev", 00:24:35.042 "uuid": "42dce9ed-f773-433e-8e27-b664ce8568fb", 00:24:35.042 "is_configured": true, 00:24:35.042 "data_offset": 2048, 
00:24:35.042 "data_size": 63488 00:24:35.042 }, 00:24:35.042 { 00:24:35.042 "name": "BaseBdev2", 00:24:35.042 "uuid": "54323e56-5cbc-4b0b-95b6-e11d6f369a4f", 00:24:35.042 "is_configured": true, 00:24:35.042 "data_offset": 2048, 00:24:35.042 "data_size": 63488 00:24:35.042 }, 00:24:35.042 { 00:24:35.042 "name": "BaseBdev3", 00:24:35.042 "uuid": "6746263c-2337-499d-8fbd-56d478b0e163", 00:24:35.042 "is_configured": true, 00:24:35.042 "data_offset": 2048, 00:24:35.042 "data_size": 63488 00:24:35.042 }, 00:24:35.042 { 00:24:35.042 "name": "BaseBdev4", 00:24:35.042 "uuid": "f36e5e40-569d-4c5a-8c68-ac4185bce243", 00:24:35.042 "is_configured": true, 00:24:35.042 "data_offset": 2048, 00:24:35.042 "data_size": 63488 00:24:35.042 } 00:24:35.042 ] 00:24:35.042 }' 00:24:35.042 06:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:35.042 06:56:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.633 06:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:35.633 06:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:35.633 06:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:35.633 06:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:35.633 06:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:35.633 06:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:35.633 06:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:35.633 06:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:35.892 [2024-08-14 06:56:02.906389] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:35.892 06:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:35.892 "name": "Existed_Raid", 00:24:35.892 "aliases": [ 00:24:35.892 "9aa42a35-8eaa-4404-bc2d-e1e579a98f4a" 00:24:35.892 ], 00:24:35.892 "product_name": "Raid Volume", 00:24:35.892 "block_size": 512, 00:24:35.892 "num_blocks": 190464, 00:24:35.892 "uuid": "9aa42a35-8eaa-4404-bc2d-e1e579a98f4a", 00:24:35.892 "assigned_rate_limits": { 00:24:35.892 "rw_ios_per_sec": 0, 00:24:35.892 "rw_mbytes_per_sec": 0, 00:24:35.892 "r_mbytes_per_sec": 0, 00:24:35.892 "w_mbytes_per_sec": 0 00:24:35.892 }, 00:24:35.892 "claimed": false, 00:24:35.892 "zoned": false, 00:24:35.892 "supported_io_types": { 00:24:35.892 "read": true, 00:24:35.892 "write": true, 00:24:35.892 "unmap": false, 00:24:35.892 "flush": false, 00:24:35.892 "reset": true, 00:24:35.892 "nvme_admin": false, 00:24:35.892 "nvme_io": false, 00:24:35.892 "nvme_io_md": false, 00:24:35.892 "write_zeroes": true, 00:24:35.892 "zcopy": false, 00:24:35.892 "get_zone_info": false, 00:24:35.892 "zone_management": false, 00:24:35.892 "zone_append": false, 00:24:35.892 "compare": false, 00:24:35.892 "compare_and_write": false, 00:24:35.892 "abort": false, 00:24:35.892 "seek_hole": false, 00:24:35.892 "seek_data": false, 00:24:35.892 "copy": false, 00:24:35.892 "nvme_iov_md": false 00:24:35.892 }, 00:24:35.892 "driver_specific": { 00:24:35.892 "raid": { 00:24:35.892 "uuid": 
"9aa42a35-8eaa-4404-bc2d-e1e579a98f4a", 00:24:35.892 "strip_size_kb": 64, 00:24:35.892 "state": "online", 00:24:35.892 "raid_level": "raid5f", 00:24:35.892 "superblock": true, 00:24:35.892 "num_base_bdevs": 4, 00:24:35.892 "num_base_bdevs_discovered": 4, 00:24:35.892 "num_base_bdevs_operational": 4, 00:24:35.892 "base_bdevs_list": [ 00:24:35.892 { 00:24:35.892 "name": "NewBaseBdev", 00:24:35.892 "uuid": "42dce9ed-f773-433e-8e27-b664ce8568fb", 00:24:35.892 "is_configured": true, 00:24:35.892 "data_offset": 2048, 00:24:35.892 "data_size": 63488 00:24:35.892 }, 00:24:35.892 { 00:24:35.892 "name": "BaseBdev2", 00:24:35.892 "uuid": "54323e56-5cbc-4b0b-95b6-e11d6f369a4f", 00:24:35.892 "is_configured": true, 00:24:35.892 "data_offset": 2048, 00:24:35.892 "data_size": 63488 00:24:35.892 }, 00:24:35.892 { 00:24:35.892 "name": "BaseBdev3", 00:24:35.892 "uuid": "6746263c-2337-499d-8fbd-56d478b0e163", 00:24:35.892 "is_configured": true, 00:24:35.892 "data_offset": 2048, 00:24:35.892 "data_size": 63488 00:24:35.892 }, 00:24:35.892 { 00:24:35.892 "name": "BaseBdev4", 00:24:35.892 "uuid": "f36e5e40-569d-4c5a-8c68-ac4185bce243", 00:24:35.892 "is_configured": true, 00:24:35.892 "data_offset": 2048, 00:24:35.892 "data_size": 63488 00:24:35.892 } 00:24:35.892 ] 00:24:35.892 } 00:24:35.892 } 00:24:35.892 }' 00:24:35.892 06:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:35.892 06:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:35.892 BaseBdev2 00:24:35.892 BaseBdev3 00:24:35.892 BaseBdev4' 00:24:35.892 06:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:35.892 06:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:35.892 06:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:36.151 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:36.151 "name": "NewBaseBdev", 00:24:36.151 "aliases": [ 00:24:36.151 "42dce9ed-f773-433e-8e27-b664ce8568fb" 00:24:36.151 ], 00:24:36.151 "product_name": "Malloc disk", 00:24:36.151 "block_size": 512, 00:24:36.151 "num_blocks": 65536, 00:24:36.151 "uuid": "42dce9ed-f773-433e-8e27-b664ce8568fb", 00:24:36.151 "assigned_rate_limits": { 00:24:36.151 "rw_ios_per_sec": 0, 00:24:36.151 "rw_mbytes_per_sec": 0, 00:24:36.151 "r_mbytes_per_sec": 0, 00:24:36.151 "w_mbytes_per_sec": 0 00:24:36.151 }, 00:24:36.151 "claimed": true, 00:24:36.151 "claim_type": "exclusive_write", 00:24:36.151 "zoned": false, 00:24:36.151 "supported_io_types": { 00:24:36.151 "read": true, 00:24:36.151 "write": true, 00:24:36.151 "unmap": true, 00:24:36.151 "flush": true, 00:24:36.151 "reset": true, 00:24:36.151 "nvme_admin": false, 00:24:36.151 "nvme_io": false, 00:24:36.151 "nvme_io_md": false, 00:24:36.151 "write_zeroes": true, 00:24:36.151 "zcopy": true, 00:24:36.151 "get_zone_info": false, 00:24:36.151 "zone_management": false, 00:24:36.151 "zone_append": false, 00:24:36.151 "compare": false, 00:24:36.151 "compare_and_write": false, 00:24:36.151 "abort": true, 00:24:36.151 "seek_hole": false, 00:24:36.151 "seek_data": false, 00:24:36.151 "copy": true, 00:24:36.151 "nvme_iov_md": false 00:24:36.151 }, 00:24:36.151 "memory_domains": [ 00:24:36.151 { 
00:24:36.151 "dma_device_id": "system", 00:24:36.151 "dma_device_type": 1 00:24:36.151 }, 00:24:36.151 { 00:24:36.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.151 "dma_device_type": 2 00:24:36.151 } 00:24:36.151 ], 00:24:36.151 "driver_specific": {} 00:24:36.151 }' 00:24:36.151 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:36.151 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:36.151 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:36.151 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:36.411 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:36.411 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:36.411 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:36.411 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:36.411 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:36.411 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:36.411 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:36.411 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:36.411 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:36.411 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:36.411 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:36.671 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:36.671 "name": "BaseBdev2", 00:24:36.671 "aliases": [ 00:24:36.671 "54323e56-5cbc-4b0b-95b6-e11d6f369a4f" 00:24:36.671 ], 00:24:36.671 "product_name": "Malloc disk", 00:24:36.671 "block_size": 512, 00:24:36.671 "num_blocks": 65536, 00:24:36.671 "uuid": "54323e56-5cbc-4b0b-95b6-e11d6f369a4f", 00:24:36.671 "assigned_rate_limits": { 00:24:36.671 "rw_ios_per_sec": 0, 00:24:36.671 "rw_mbytes_per_sec": 0, 00:24:36.671 "r_mbytes_per_sec": 0, 00:24:36.671 "w_mbytes_per_sec": 0 00:24:36.671 }, 00:24:36.671 "claimed": true, 00:24:36.671 "claim_type": "exclusive_write", 00:24:36.671 "zoned": false, 00:24:36.671 "supported_io_types": { 00:24:36.671 "read": true, 00:24:36.671 "write": true, 00:24:36.671 "unmap": true, 00:24:36.671 "flush": true, 00:24:36.671 "reset": true, 00:24:36.671 "nvme_admin": false, 00:24:36.671 "nvme_io": false, 00:24:36.671 "nvme_io_md": false, 00:24:36.671 "write_zeroes": true, 00:24:36.671 "zcopy": true, 00:24:36.671 "get_zone_info": false, 00:24:36.671 "zone_management": false, 00:24:36.671 "zone_append": false, 00:24:36.671 "compare": false, 00:24:36.671 "compare_and_write": false, 00:24:36.671 "abort": true, 00:24:36.671 "seek_hole": false, 00:24:36.671 "seek_data": false, 00:24:36.671 "copy": true, 00:24:36.671 "nvme_iov_md": false 00:24:36.671 }, 00:24:36.671 "memory_domains": [ 00:24:36.671 { 00:24:36.671 "dma_device_id": "system", 00:24:36.671 "dma_device_type": 1 00:24:36.671 }, 00:24:36.671 { 00:24:36.671 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.671 "dma_device_type": 2 00:24:36.671 } 00:24:36.671 ], 00:24:36.671 "driver_specific": {} 00:24:36.671 }' 00:24:36.671 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:36.930 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:36.930 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:36.930 06:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:36.930 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:36.930 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:36.930 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:36.930 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:37.189 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:37.189 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:37.189 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:37.189 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:37.189 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:37.189 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:37.189 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:37.449 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:37.449 "name": "BaseBdev3", 00:24:37.449 "aliases": [ 00:24:37.449 "6746263c-2337-499d-8fbd-56d478b0e163" 00:24:37.449 ], 00:24:37.449 "product_name": "Malloc disk", 00:24:37.449 "block_size": 512, 00:24:37.449 "num_blocks": 65536, 00:24:37.449 "uuid": "6746263c-2337-499d-8fbd-56d478b0e163", 00:24:37.449 "assigned_rate_limits": { 00:24:37.449 "rw_ios_per_sec": 0, 00:24:37.449 "rw_mbytes_per_sec": 0, 00:24:37.449 "r_mbytes_per_sec": 0, 00:24:37.449 "w_mbytes_per_sec": 0 00:24:37.449 }, 00:24:37.449 "claimed": true, 00:24:37.449 "claim_type": "exclusive_write", 00:24:37.449 "zoned": false, 00:24:37.449 "supported_io_types": { 00:24:37.449 "read": true, 00:24:37.449 "write": true, 00:24:37.449 "unmap": true, 00:24:37.449 "flush": true, 00:24:37.449 "reset": true, 00:24:37.449 "nvme_admin": false, 00:24:37.449 "nvme_io": false, 00:24:37.449 "nvme_io_md": false, 00:24:37.449 "write_zeroes": true, 00:24:37.449 "zcopy": true, 00:24:37.449 "get_zone_info": false, 00:24:37.449 "zone_management": false, 00:24:37.449 "zone_append": false, 00:24:37.449 "compare": false, 00:24:37.449 "compare_and_write": false, 00:24:37.449 "abort": true, 00:24:37.449 "seek_hole": false, 00:24:37.449 "seek_data": false, 00:24:37.449 "copy": true, 00:24:37.449 "nvme_iov_md": false 00:24:37.449 }, 00:24:37.449 "memory_domains": [ 00:24:37.449 { 00:24:37.449 "dma_device_id": "system", 00:24:37.449 "dma_device_type": 1 00:24:37.449 }, 00:24:37.449 { 00:24:37.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.449 "dma_device_type": 2 00:24:37.449 } 00:24:37.449 ], 00:24:37.449 
"driver_specific": {} 00:24:37.449 }' 00:24:37.449 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:37.449 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:37.449 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:37.449 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:37.449 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:37.708 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:37.708 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:37.708 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:37.708 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:37.708 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:37.708 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:37.708 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:37.708 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:37.708 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:37.708 06:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:37.968 06:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:37.968 "name": "BaseBdev4", 00:24:37.968 "aliases": [ 00:24:37.968 "f36e5e40-569d-4c5a-8c68-ac4185bce243" 00:24:37.968 ], 00:24:37.968 "product_name": "Malloc disk", 00:24:37.968 "block_size": 512, 00:24:37.968 "num_blocks": 65536, 00:24:37.968 "uuid": "f36e5e40-569d-4c5a-8c68-ac4185bce243", 00:24:37.968 "assigned_rate_limits": { 00:24:37.968 "rw_ios_per_sec": 0, 00:24:37.968 "rw_mbytes_per_sec": 0, 00:24:37.968 "r_mbytes_per_sec": 0, 00:24:37.968 "w_mbytes_per_sec": 0 00:24:37.968 }, 00:24:37.968 "claimed": true, 00:24:37.968 "claim_type": "exclusive_write", 00:24:37.968 "zoned": false, 00:24:37.968 "supported_io_types": { 00:24:37.968 "read": true, 00:24:37.968 "write": true, 00:24:37.968 "unmap": true, 00:24:37.968 "flush": true, 00:24:37.968 "reset": true, 00:24:37.968 "nvme_admin": false, 00:24:37.968 "nvme_io": false, 00:24:37.968 "nvme_io_md": false, 00:24:37.968 "write_zeroes": true, 00:24:37.968 "zcopy": true, 00:24:37.968 "get_zone_info": false, 00:24:37.968 "zone_management": false, 00:24:37.968 "zone_append": false, 00:24:37.968 "compare": false, 00:24:37.968 "compare_and_write": false, 00:24:37.968 "abort": true, 00:24:37.968 "seek_hole": false, 00:24:37.968 "seek_data": false, 00:24:37.968 "copy": true, 00:24:37.968 "nvme_iov_md": false 00:24:37.968 }, 00:24:37.968 "memory_domains": [ 00:24:37.968 { 00:24:37.968 "dma_device_id": "system", 00:24:37.968 "dma_device_type": 1 00:24:37.968 }, 00:24:37.968 { 00:24:37.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.968 "dma_device_type": 2 00:24:37.968 } 00:24:37.968 ], 00:24:37.968 "driver_specific": {} 00:24:37.968 }' 00:24:37.968 06:56:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:37.968 06:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:37.968 06:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:37.968 06:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:38.228 06:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:38.228 06:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:38.228 06:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:38.228 06:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:38.228 06:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:38.228 06:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:38.228 06:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:38.486 06:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:38.486 06:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:38.486 [2024-08-14 06:56:05.684824] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:38.486 [2024-08-14 06:56:05.684858] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:38.486 [2024-08-14 06:56:05.684955] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:38.486 [2024-08-14 06:56:05.685265] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:38.486 [2024-08-14 06:56:05.685297] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:24:38.487 06:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 102867 00:24:38.487 06:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 102867 ']' 00:24:38.487 06:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 102867 00:24:38.487 06:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:24:38.487 06:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:38.487 06:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102867 00:24:38.487 06:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:38.487 killing process with pid 102867 00:24:38.487 06:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:38.487 06:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102867' 00:24:38.487 06:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 102867 00:24:38.487 [2024-08-14 06:56:05.731028] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:38.487 06:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 
-- # wait 102867 00:24:38.751 [2024-08-14 06:56:05.771846] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:39.010 06:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:24:39.010 00:24:39.010 real 0m32.853s 00:24:39.010 user 1m1.259s 00:24:39.010 sys 0m4.781s 00:24:39.010 06:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:39.010 06:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:39.010 ************************************ 00:24:39.010 END TEST raid5f_state_function_test_sb 00:24:39.010 ************************************ 00:24:39.010 06:56:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:24:39.010 06:56:06 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:24:39.010 06:56:06 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:39.010 06:56:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:39.010 ************************************ 00:24:39.010 START TEST raid5f_superblock_test 00:24:39.010 ************************************ 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid5f 4 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid5f 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid5f '!=' raid1 ']' 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=103910 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 103910 /var/tmp/spdk-raid.sock 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@827 -- # 
'[' -z 103910 ']' 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:39.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:39.010 06:56:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.010 [2024-08-14 06:56:06.177257] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:24:39.010 [2024-08-14 06:56:06.177471] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103910 ] 00:24:39.269 [2024-08-14 06:56:06.322975] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.269 [2024-08-14 06:56:06.374199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.269 [2024-08-14 06:56:06.418206] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:39.269 [2024-08-14 06:56:06.418242] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:39.836 06:56:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:39.836 06:56:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:24:39.836 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:24:39.836 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:39.836 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:24:39.836 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:24:39.836 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:39.836 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:39.836 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:39.836 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:39.836 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:40.094 malloc1 00:24:40.094 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:40.353 [2024-08-14 06:56:07.520583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:40.353 [2024-08-14 06:56:07.520767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:40.353 [2024-08-14 06:56:07.520824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000006680 00:24:40.353 [2024-08-14 06:56:07.520859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:40.353 [2024-08-14 06:56:07.523387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:40.353 [2024-08-14 06:56:07.523480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:40.353 pt1 00:24:40.353 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:40.353 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:40.353 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:24:40.353 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:24:40.353 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:40.353 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:40.353 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:40.353 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:40.353 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:40.612 malloc2 00:24:40.612 06:56:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:40.872 [2024-08-14 06:56:08.045203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:40.872 [2024-08-14 06:56:08.045347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:40.872 [2024-08-14 06:56:08.045392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:40.872 [2024-08-14 06:56:08.045446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:40.872 [2024-08-14 06:56:08.047915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:40.872 [2024-08-14 06:56:08.048003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:40.872 pt2 00:24:40.872 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:40.872 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:40.872 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:24:40.872 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:24:40.872 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:40.872 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:40.872 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:40.872 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:40.872 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:41.131 malloc3 00:24:41.131 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:41.391 [2024-08-14 06:56:08.501160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:41.391 [2024-08-14 06:56:08.501325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:41.391 [2024-08-14 06:56:08.501368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:41.391 [2024-08-14 06:56:08.501396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:41.391 [2024-08-14 06:56:08.503667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:41.391 [2024-08-14 06:56:08.503742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:41.391 pt3 00:24:41.391 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:41.391 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:41.391 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:24:41.391 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:24:41.391 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:41.391 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:41.391 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:41.391 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:41.391 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:41.662 malloc4 00:24:41.662 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:41.942 [2024-08-14 06:56:08.945710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:41.942 [2024-08-14 06:56:08.945890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:41.942 [2024-08-14 06:56:08.945919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:41.942 [2024-08-14 06:56:08.945930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:41.942 [2024-08-14 06:56:08.948262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:41.942 [2024-08-14 06:56:08.948305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:41.942 pt4 00:24:41.942 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:41.942 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:41.942 06:56:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f 
-b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:41.942 [2024-08-14 06:56:09.157383] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:41.942 [2024-08-14 06:56:09.159429] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:41.942 [2024-08-14 06:56:09.159510] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:41.942 [2024-08-14 06:56:09.159557] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:41.942 [2024-08-14 06:56:09.159768] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:24:41.942 [2024-08-14 06:56:09.159782] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:41.942 [2024-08-14 06:56:09.160103] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:24:41.942 [2024-08-14 06:56:09.160680] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:24:41.942 [2024-08-14 06:56:09.160702] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:24:41.942 [2024-08-14 06:56:09.160865] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:41.942 06:56:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:41.942 06:56:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:41.942 06:56:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:41.942 06:56:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:41.942 06:56:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:41.942 06:56:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:41.942 06:56:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:41.942 06:56:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:41.942 06:56:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:41.942 06:56:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:41.942 06:56:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.942 06:56:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.204 06:56:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:42.204 "name": "raid_bdev1", 00:24:42.204 "uuid": "f149a3f0-f5e3-4592-9848-ead1c9a52dfb", 00:24:42.204 "strip_size_kb": 64, 00:24:42.204 "state": "online", 00:24:42.204 "raid_level": "raid5f", 00:24:42.204 "superblock": true, 00:24:42.204 "num_base_bdevs": 4, 00:24:42.204 "num_base_bdevs_discovered": 4, 00:24:42.204 "num_base_bdevs_operational": 4, 00:24:42.204 "base_bdevs_list": [ 00:24:42.204 { 00:24:42.204 "name": "pt1", 00:24:42.204 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:42.204 "is_configured": true, 00:24:42.204 "data_offset": 2048, 00:24:42.204 "data_size": 63488 00:24:42.204 }, 00:24:42.204 { 00:24:42.204 "name": "pt2", 00:24:42.204 "uuid": "00000000-0000-0000-0000-000000000002", 
00:24:42.204 "is_configured": true, 00:24:42.204 "data_offset": 2048, 00:24:42.204 "data_size": 63488 00:24:42.204 }, 00:24:42.204 { 00:24:42.204 "name": "pt3", 00:24:42.204 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:42.204 "is_configured": true, 00:24:42.204 "data_offset": 2048, 00:24:42.204 "data_size": 63488 00:24:42.204 }, 00:24:42.204 { 00:24:42.204 "name": "pt4", 00:24:42.204 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:42.204 "is_configured": true, 00:24:42.204 "data_offset": 2048, 00:24:42.204 "data_size": 63488 00:24:42.204 } 00:24:42.204 ] 00:24:42.204 }' 00:24:42.204 06:56:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:42.204 06:56:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.773 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:24:42.773 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:43.033 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:43.033 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:43.033 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:43.033 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:43.033 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:43.033 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:43.033 [2024-08-14 06:56:10.233152] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:43.033 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:43.033 "name": "raid_bdev1", 00:24:43.033 "aliases": [ 00:24:43.033 "f149a3f0-f5e3-4592-9848-ead1c9a52dfb" 00:24:43.033 ], 00:24:43.033 "product_name": "Raid Volume", 00:24:43.033 "block_size": 512, 00:24:43.033 "num_blocks": 190464, 00:24:43.033 "uuid": "f149a3f0-f5e3-4592-9848-ead1c9a52dfb", 00:24:43.033 "assigned_rate_limits": { 00:24:43.033 "rw_ios_per_sec": 0, 00:24:43.033 "rw_mbytes_per_sec": 0, 00:24:43.033 "r_mbytes_per_sec": 0, 00:24:43.033 "w_mbytes_per_sec": 0 00:24:43.033 }, 00:24:43.033 "claimed": false, 00:24:43.033 "zoned": false, 00:24:43.033 "supported_io_types": { 00:24:43.033 "read": true, 00:24:43.033 "write": true, 00:24:43.033 "unmap": false, 00:24:43.033 "flush": false, 00:24:43.033 "reset": true, 00:24:43.033 "nvme_admin": false, 00:24:43.033 "nvme_io": false, 00:24:43.033 "nvme_io_md": false, 00:24:43.033 "write_zeroes": true, 00:24:43.033 "zcopy": false, 00:24:43.033 "get_zone_info": false, 00:24:43.033 "zone_management": false, 00:24:43.033 "zone_append": false, 00:24:43.033 "compare": false, 00:24:43.033 "compare_and_write": false, 00:24:43.033 "abort": false, 00:24:43.033 "seek_hole": false, 00:24:43.033 "seek_data": false, 00:24:43.033 "copy": false, 00:24:43.033 "nvme_iov_md": false 00:24:43.033 }, 00:24:43.033 "driver_specific": { 00:24:43.033 "raid": { 00:24:43.033 "uuid": "f149a3f0-f5e3-4592-9848-ead1c9a52dfb", 00:24:43.034 "strip_size_kb": 64, 00:24:43.034 "state": "online", 00:24:43.034 "raid_level": "raid5f", 00:24:43.034 "superblock": true, 00:24:43.034 "num_base_bdevs": 4, 00:24:43.034 "num_base_bdevs_discovered": 4, 00:24:43.034 
"num_base_bdevs_operational": 4, 00:24:43.034 "base_bdevs_list": [ 00:24:43.034 { 00:24:43.034 "name": "pt1", 00:24:43.034 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:43.034 "is_configured": true, 00:24:43.034 "data_offset": 2048, 00:24:43.034 "data_size": 63488 00:24:43.034 }, 00:24:43.034 { 00:24:43.034 "name": "pt2", 00:24:43.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:43.034 "is_configured": true, 00:24:43.034 "data_offset": 2048, 00:24:43.034 "data_size": 63488 00:24:43.034 }, 00:24:43.034 { 00:24:43.034 "name": "pt3", 00:24:43.034 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:43.034 "is_configured": true, 00:24:43.034 "data_offset": 2048, 00:24:43.034 "data_size": 63488 00:24:43.034 }, 00:24:43.034 { 00:24:43.034 "name": "pt4", 00:24:43.034 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:43.034 "is_configured": true, 00:24:43.034 "data_offset": 2048, 00:24:43.034 "data_size": 63488 00:24:43.034 } 00:24:43.034 ] 00:24:43.034 } 00:24:43.034 } 00:24:43.034 }' 00:24:43.034 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:43.034 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:43.034 pt2 00:24:43.034 pt3 00:24:43.034 pt4' 00:24:43.034 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:43.294 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:43.294 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:43.294 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:43.294 "name": "pt1", 00:24:43.294 "aliases": [ 00:24:43.294 "00000000-0000-0000-0000-000000000001" 00:24:43.294 ], 00:24:43.294 "product_name": "passthru", 00:24:43.294 "block_size": 512, 00:24:43.294 "num_blocks": 65536, 00:24:43.294 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:43.294 "assigned_rate_limits": { 00:24:43.294 "rw_ios_per_sec": 0, 00:24:43.294 "rw_mbytes_per_sec": 0, 00:24:43.294 "r_mbytes_per_sec": 0, 00:24:43.294 "w_mbytes_per_sec": 0 00:24:43.294 }, 00:24:43.294 "claimed": true, 00:24:43.294 "claim_type": "exclusive_write", 00:24:43.294 "zoned": false, 00:24:43.294 "supported_io_types": { 00:24:43.294 "read": true, 00:24:43.294 "write": true, 00:24:43.294 "unmap": true, 00:24:43.294 "flush": true, 00:24:43.294 "reset": true, 00:24:43.294 "nvme_admin": false, 00:24:43.294 "nvme_io": false, 00:24:43.294 "nvme_io_md": false, 00:24:43.294 "write_zeroes": true, 00:24:43.294 "zcopy": true, 00:24:43.294 "get_zone_info": false, 00:24:43.294 "zone_management": false, 00:24:43.294 "zone_append": false, 00:24:43.294 "compare": false, 00:24:43.294 "compare_and_write": false, 00:24:43.294 "abort": true, 00:24:43.294 "seek_hole": false, 00:24:43.294 "seek_data": false, 00:24:43.294 "copy": true, 00:24:43.294 "nvme_iov_md": false 00:24:43.294 }, 00:24:43.294 "memory_domains": [ 00:24:43.294 { 00:24:43.294 "dma_device_id": "system", 00:24:43.294 "dma_device_type": 1 00:24:43.294 }, 00:24:43.294 { 00:24:43.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:43.294 "dma_device_type": 2 00:24:43.294 } 00:24:43.294 ], 00:24:43.294 "driver_specific": { 00:24:43.294 "passthru": { 00:24:43.294 "name": "pt1", 00:24:43.294 "base_bdev_name": "malloc1" 00:24:43.294 } 00:24:43.294 } 
00:24:43.294 }' 00:24:43.294 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:43.554 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:43.554 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:43.554 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:43.554 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:43.554 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:43.554 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:43.554 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:43.814 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:43.814 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:43.814 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:43.814 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:43.814 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:43.814 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:43.814 06:56:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:44.073 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:44.073 "name": "pt2", 00:24:44.073 "aliases": [ 00:24:44.073 "00000000-0000-0000-0000-000000000002" 00:24:44.073 ], 00:24:44.073 "product_name": "passthru", 00:24:44.073 "block_size": 512, 00:24:44.073 "num_blocks": 65536, 00:24:44.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:44.073 "assigned_rate_limits": { 00:24:44.073 "rw_ios_per_sec": 0, 00:24:44.073 "rw_mbytes_per_sec": 0, 00:24:44.073 "r_mbytes_per_sec": 0, 00:24:44.073 "w_mbytes_per_sec": 0 00:24:44.073 }, 00:24:44.073 "claimed": true, 00:24:44.073 "claim_type": "exclusive_write", 00:24:44.073 "zoned": false, 00:24:44.073 "supported_io_types": { 00:24:44.073 "read": true, 00:24:44.073 "write": true, 00:24:44.073 "unmap": true, 00:24:44.073 "flush": true, 00:24:44.073 "reset": true, 00:24:44.073 "nvme_admin": false, 00:24:44.073 "nvme_io": false, 00:24:44.073 "nvme_io_md": false, 00:24:44.073 "write_zeroes": true, 00:24:44.073 "zcopy": true, 00:24:44.073 "get_zone_info": false, 00:24:44.073 "zone_management": false, 00:24:44.073 "zone_append": false, 00:24:44.073 "compare": false, 00:24:44.073 "compare_and_write": false, 00:24:44.073 "abort": true, 00:24:44.073 "seek_hole": false, 00:24:44.073 "seek_data": false, 00:24:44.073 "copy": true, 00:24:44.073 "nvme_iov_md": false 00:24:44.073 }, 00:24:44.073 "memory_domains": [ 00:24:44.073 { 00:24:44.073 "dma_device_id": "system", 00:24:44.073 "dma_device_type": 1 00:24:44.073 }, 00:24:44.073 { 00:24:44.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:44.073 "dma_device_type": 2 00:24:44.073 } 00:24:44.073 ], 00:24:44.073 "driver_specific": { 00:24:44.073 "passthru": { 00:24:44.073 "name": "pt2", 00:24:44.073 "base_bdev_name": "malloc2" 00:24:44.073 } 00:24:44.073 } 00:24:44.073 }' 00:24:44.073 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
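For readability, the passthru/raid construction that the trace above walks through can be condensed into a short sketch. Everything in it is taken from the log itself (the rpc.py path, the spdk-raid.sock socket, the 32 MB / 512-byte malloc bdevs, the fixed passthru UUIDs, and the raid5f create call); the loop is an illustrative reconstruction for readability, not a verbatim excerpt of bdev_raid.sh.

#!/usr/bin/env bash
# Sketch of the RPC sequence exercised by raid5f_superblock_test above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for i in 1 2 3 4; do
    # 32 MB malloc bdev with 512-byte blocks, wrapped by a passthru bdev
    $rpc bdev_malloc_create 32 512 -b "malloc$i"
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
         -u "00000000-0000-0000-0000-00000000000$i"
done

# raid5f volume, 64 KiB strip size, with an on-disk superblock (-s)
$rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

# The state dump should report "online" with 4 of 4 base bdevs discovered,
# matching the raid_bdev_info JSON captured in the trace.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

After this verification the trace goes on to delete the raid and its passthru bdevs, then re-create the passthru bdevs so the array is reassembled from the superblocks left on the base devices, which is the sequence the remainder of the log covers.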
00:24:44.073 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:44.073 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:44.073 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:44.073 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:44.333 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:44.333 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:44.333 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:44.333 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:44.333 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:44.333 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:44.333 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:44.333 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:44.333 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:44.333 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:44.593 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:44.593 "name": "pt3", 00:24:44.593 "aliases": [ 00:24:44.593 "00000000-0000-0000-0000-000000000003" 00:24:44.593 ], 00:24:44.593 "product_name": "passthru", 00:24:44.593 "block_size": 512, 00:24:44.593 "num_blocks": 65536, 00:24:44.593 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:44.593 "assigned_rate_limits": { 00:24:44.593 "rw_ios_per_sec": 0, 00:24:44.593 "rw_mbytes_per_sec": 0, 00:24:44.593 "r_mbytes_per_sec": 0, 00:24:44.593 "w_mbytes_per_sec": 0 00:24:44.593 }, 00:24:44.593 "claimed": true, 00:24:44.593 "claim_type": "exclusive_write", 00:24:44.593 "zoned": false, 00:24:44.593 "supported_io_types": { 00:24:44.593 "read": true, 00:24:44.593 "write": true, 00:24:44.593 "unmap": true, 00:24:44.593 "flush": true, 00:24:44.593 "reset": true, 00:24:44.593 "nvme_admin": false, 00:24:44.593 "nvme_io": false, 00:24:44.593 "nvme_io_md": false, 00:24:44.593 "write_zeroes": true, 00:24:44.593 "zcopy": true, 00:24:44.593 "get_zone_info": false, 00:24:44.593 "zone_management": false, 00:24:44.593 "zone_append": false, 00:24:44.593 "compare": false, 00:24:44.593 "compare_and_write": false, 00:24:44.593 "abort": true, 00:24:44.593 "seek_hole": false, 00:24:44.593 "seek_data": false, 00:24:44.593 "copy": true, 00:24:44.593 "nvme_iov_md": false 00:24:44.593 }, 00:24:44.593 "memory_domains": [ 00:24:44.593 { 00:24:44.593 "dma_device_id": "system", 00:24:44.593 "dma_device_type": 1 00:24:44.593 }, 00:24:44.593 { 00:24:44.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:44.593 "dma_device_type": 2 00:24:44.593 } 00:24:44.593 ], 00:24:44.593 "driver_specific": { 00:24:44.593 "passthru": { 00:24:44.593 "name": "pt3", 00:24:44.593 "base_bdev_name": "malloc3" 00:24:44.593 } 00:24:44.593 } 00:24:44.593 }' 00:24:44.593 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:44.593 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:44.853 06:56:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:44.853 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:44.853 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:44.853 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:44.853 06:56:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:44.853 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:44.853 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:44.853 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:45.111 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:45.111 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:45.111 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:45.111 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:45.111 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:45.371 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:45.371 "name": "pt4", 00:24:45.371 "aliases": [ 00:24:45.371 "00000000-0000-0000-0000-000000000004" 00:24:45.371 ], 00:24:45.371 "product_name": "passthru", 00:24:45.371 "block_size": 512, 00:24:45.371 "num_blocks": 65536, 00:24:45.371 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:45.371 "assigned_rate_limits": { 00:24:45.371 "rw_ios_per_sec": 0, 00:24:45.371 "rw_mbytes_per_sec": 0, 00:24:45.371 "r_mbytes_per_sec": 0, 00:24:45.371 "w_mbytes_per_sec": 0 00:24:45.371 }, 00:24:45.371 "claimed": true, 00:24:45.371 "claim_type": "exclusive_write", 00:24:45.371 "zoned": false, 00:24:45.371 "supported_io_types": { 00:24:45.371 "read": true, 00:24:45.371 "write": true, 00:24:45.371 "unmap": true, 00:24:45.371 "flush": true, 00:24:45.371 "reset": true, 00:24:45.371 "nvme_admin": false, 00:24:45.371 "nvme_io": false, 00:24:45.371 "nvme_io_md": false, 00:24:45.371 "write_zeroes": true, 00:24:45.371 "zcopy": true, 00:24:45.371 "get_zone_info": false, 00:24:45.371 "zone_management": false, 00:24:45.371 "zone_append": false, 00:24:45.371 "compare": false, 00:24:45.371 "compare_and_write": false, 00:24:45.371 "abort": true, 00:24:45.371 "seek_hole": false, 00:24:45.371 "seek_data": false, 00:24:45.371 "copy": true, 00:24:45.371 "nvme_iov_md": false 00:24:45.371 }, 00:24:45.371 "memory_domains": [ 00:24:45.371 { 00:24:45.371 "dma_device_id": "system", 00:24:45.371 "dma_device_type": 1 00:24:45.371 }, 00:24:45.371 { 00:24:45.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.371 "dma_device_type": 2 00:24:45.371 } 00:24:45.371 ], 00:24:45.371 "driver_specific": { 00:24:45.371 "passthru": { 00:24:45.371 "name": "pt4", 00:24:45.371 "base_bdev_name": "malloc4" 00:24:45.371 } 00:24:45.371 } 00:24:45.371 }' 00:24:45.371 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:45.371 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:45.371 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:45.371 06:56:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:45.371 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:45.371 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:45.371 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:45.630 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:45.630 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:45.630 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:45.630 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:45.630 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:45.630 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:24:45.630 06:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:45.889 [2024-08-14 06:56:13.056566] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:45.889 06:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=f149a3f0-f5e3-4592-9848-ead1c9a52dfb 00:24:45.889 06:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z f149a3f0-f5e3-4592-9848-ead1c9a52dfb ']' 00:24:45.889 06:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:46.148 [2024-08-14 06:56:13.303949] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:46.148 [2024-08-14 06:56:13.304107] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:46.148 [2024-08-14 06:56:13.304286] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:46.148 [2024-08-14 06:56:13.304392] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:46.148 [2024-08-14 06:56:13.304415] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:24:46.148 06:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.148 06:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:24:46.408 06:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:24:46.408 06:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:24:46.408 06:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:46.408 06:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:46.667 06:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:46.667 06:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:46.926 06:56:14 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:46.926 06:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:47.185 06:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:47.185 06:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:47.443 06:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:47.443 06:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:47.702 06:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:24:47.702 06:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:47.702 06:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:24:47.702 06:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:47.702 06:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:47.702 06:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:47.702 06:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:47.702 06:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:47.702 06:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:47.702 06:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:47.702 06:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:47.702 06:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:47.702 06:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:47.961 [2024-08-14 06:56:15.077024] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:47.961 [2024-08-14 06:56:15.079219] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:47.961 [2024-08-14 06:56:15.079276] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:47.961 [2024-08-14 06:56:15.079315] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:47.961 [2024-08-14 06:56:15.079367] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on 
bdev malloc1 00:24:47.962 [2024-08-14 06:56:15.079445] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:47.962 [2024-08-14 06:56:15.079469] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:24:47.962 [2024-08-14 06:56:15.079491] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:24:47.962 [2024-08-14 06:56:15.079506] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:47.962 [2024-08-14 06:56:15.079519] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:24:47.962 request: 00:24:47.962 { 00:24:47.962 "name": "raid_bdev1", 00:24:47.962 "raid_level": "raid5f", 00:24:47.962 "base_bdevs": [ 00:24:47.962 "malloc1", 00:24:47.962 "malloc2", 00:24:47.962 "malloc3", 00:24:47.962 "malloc4" 00:24:47.962 ], 00:24:47.962 "strip_size_kb": 64, 00:24:47.962 "superblock": false, 00:24:47.962 "method": "bdev_raid_create", 00:24:47.962 "req_id": 1 00:24:47.962 } 00:24:47.962 Got JSON-RPC error response 00:24:47.962 response: 00:24:47.962 { 00:24:47.962 "code": -17, 00:24:47.962 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:47.962 } 00:24:47.962 06:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:24:47.962 06:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:24:47.962 06:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:24:47.962 06:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:24:47.962 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:24:47.962 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.221 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:24:48.221 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:24:48.221 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:48.480 [2024-08-14 06:56:15.564201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:48.480 [2024-08-14 06:56:15.564276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:48.480 [2024-08-14 06:56:15.564297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:48.480 [2024-08-14 06:56:15.564312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:48.480 [2024-08-14 06:56:15.566741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:48.480 [2024-08-14 06:56:15.566786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:48.480 [2024-08-14 06:56:15.566892] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:48.480 [2024-08-14 06:56:15.566937] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:48.480 pt1 00:24:48.480 06:56:15 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:48.480 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:48.480 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:48.480 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:48.480 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:48.480 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:48.480 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:48.480 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:48.480 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:48.480 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:48.480 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.480 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.740 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:48.740 "name": "raid_bdev1", 00:24:48.740 "uuid": "f149a3f0-f5e3-4592-9848-ead1c9a52dfb", 00:24:48.740 "strip_size_kb": 64, 00:24:48.740 "state": "configuring", 00:24:48.740 "raid_level": "raid5f", 00:24:48.740 "superblock": true, 00:24:48.740 "num_base_bdevs": 4, 00:24:48.740 "num_base_bdevs_discovered": 1, 00:24:48.740 "num_base_bdevs_operational": 4, 00:24:48.740 "base_bdevs_list": [ 00:24:48.740 { 00:24:48.740 "name": "pt1", 00:24:48.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:48.740 "is_configured": true, 00:24:48.740 "data_offset": 2048, 00:24:48.740 "data_size": 63488 00:24:48.740 }, 00:24:48.740 { 00:24:48.740 "name": null, 00:24:48.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:48.740 "is_configured": false, 00:24:48.740 "data_offset": 2048, 00:24:48.740 "data_size": 63488 00:24:48.740 }, 00:24:48.740 { 00:24:48.740 "name": null, 00:24:48.740 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:48.740 "is_configured": false, 00:24:48.740 "data_offset": 2048, 00:24:48.740 "data_size": 63488 00:24:48.740 }, 00:24:48.740 { 00:24:48.740 "name": null, 00:24:48.740 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:48.740 "is_configured": false, 00:24:48.740 "data_offset": 2048, 00:24:48.740 "data_size": 63488 00:24:48.740 } 00:24:48.740 ] 00:24:48.740 }' 00:24:48.740 06:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:48.740 06:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.309 06:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:24:49.309 06:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:49.569 [2024-08-14 06:56:16.746277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:49.569 [2024-08-14 06:56:16.746429] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:24:49.569 [2024-08-14 06:56:16.746470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:24:49.569 [2024-08-14 06:56:16.746505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.569 [2024-08-14 06:56:16.746993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.569 [2024-08-14 06:56:16.747060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:49.569 [2024-08-14 06:56:16.747181] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:49.569 [2024-08-14 06:56:16.747242] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:49.569 pt2 00:24:49.569 06:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:49.828 [2024-08-14 06:56:16.981923] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:49.828 06:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:49.828 06:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:49.828 06:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:49.828 06:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:49.828 06:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:49.828 06:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:49.828 06:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:49.828 06:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:49.828 06:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:49.828 06:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:49.828 06:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.828 06:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.088 06:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:50.088 "name": "raid_bdev1", 00:24:50.088 "uuid": "f149a3f0-f5e3-4592-9848-ead1c9a52dfb", 00:24:50.088 "strip_size_kb": 64, 00:24:50.088 "state": "configuring", 00:24:50.088 "raid_level": "raid5f", 00:24:50.088 "superblock": true, 00:24:50.088 "num_base_bdevs": 4, 00:24:50.088 "num_base_bdevs_discovered": 1, 00:24:50.088 "num_base_bdevs_operational": 4, 00:24:50.088 "base_bdevs_list": [ 00:24:50.088 { 00:24:50.088 "name": "pt1", 00:24:50.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:50.088 "is_configured": true, 00:24:50.088 "data_offset": 2048, 00:24:50.088 "data_size": 63488 00:24:50.088 }, 00:24:50.088 { 00:24:50.088 "name": null, 00:24:50.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:50.088 "is_configured": false, 00:24:50.088 "data_offset": 2048, 00:24:50.088 "data_size": 63488 00:24:50.088 }, 00:24:50.088 { 00:24:50.088 "name": null, 00:24:50.088 "uuid": "00000000-0000-0000-0000-000000000003", 
00:24:50.088 "is_configured": false, 00:24:50.088 "data_offset": 2048, 00:24:50.088 "data_size": 63488 00:24:50.088 }, 00:24:50.088 { 00:24:50.088 "name": null, 00:24:50.088 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:50.088 "is_configured": false, 00:24:50.088 "data_offset": 2048, 00:24:50.088 "data_size": 63488 00:24:50.088 } 00:24:50.088 ] 00:24:50.088 }' 00:24:50.088 06:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:50.088 06:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:51.027 06:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:24:51.027 06:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:24:51.027 06:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:51.027 [2024-08-14 06:56:18.136024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:51.027 [2024-08-14 06:56:18.136108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:51.027 [2024-08-14 06:56:18.136135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:51.027 [2024-08-14 06:56:18.136145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:51.027 [2024-08-14 06:56:18.136672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:51.027 [2024-08-14 06:56:18.136703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:51.027 [2024-08-14 06:56:18.136791] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:51.027 [2024-08-14 06:56:18.136814] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:51.027 pt2 00:24:51.027 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:24:51.027 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:24:51.027 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:51.287 [2024-08-14 06:56:18.363624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:51.287 [2024-08-14 06:56:18.363712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:51.287 [2024-08-14 06:56:18.363738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:51.287 [2024-08-14 06:56:18.363756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:51.287 [2024-08-14 06:56:18.364189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:51.287 [2024-08-14 06:56:18.364210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:51.287 [2024-08-14 06:56:18.364327] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:51.287 [2024-08-14 06:56:18.364350] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:51.287 pt3 00:24:51.287 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:24:51.287 06:56:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:24:51.287 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:51.546 [2024-08-14 06:56:18.607284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:51.546 [2024-08-14 06:56:18.607355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:51.546 [2024-08-14 06:56:18.607433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:24:51.546 [2024-08-14 06:56:18.607443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:51.546 [2024-08-14 06:56:18.607857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:51.546 [2024-08-14 06:56:18.607877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:51.546 [2024-08-14 06:56:18.607956] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:51.546 [2024-08-14 06:56:18.607977] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:51.546 [2024-08-14 06:56:18.608118] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:24:51.546 [2024-08-14 06:56:18.608126] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:51.546 [2024-08-14 06:56:18.608484] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:24:51.546 [2024-08-14 06:56:18.608983] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:24:51.546 [2024-08-14 06:56:18.609049] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:24:51.546 [2024-08-14 06:56:18.609206] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:51.546 pt4 00:24:51.546 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:24:51.546 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:24:51.547 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:51.547 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:51.547 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:51.547 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:51.547 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:51.547 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:51.547 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:51.547 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:51.547 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:51.547 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:51.547 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.547 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.806 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:51.806 "name": "raid_bdev1", 00:24:51.806 "uuid": "f149a3f0-f5e3-4592-9848-ead1c9a52dfb", 00:24:51.806 "strip_size_kb": 64, 00:24:51.806 "state": "online", 00:24:51.806 "raid_level": "raid5f", 00:24:51.806 "superblock": true, 00:24:51.806 "num_base_bdevs": 4, 00:24:51.806 "num_base_bdevs_discovered": 4, 00:24:51.806 "num_base_bdevs_operational": 4, 00:24:51.806 "base_bdevs_list": [ 00:24:51.806 { 00:24:51.806 "name": "pt1", 00:24:51.806 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:51.806 "is_configured": true, 00:24:51.806 "data_offset": 2048, 00:24:51.806 "data_size": 63488 00:24:51.806 }, 00:24:51.806 { 00:24:51.806 "name": "pt2", 00:24:51.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:51.806 "is_configured": true, 00:24:51.806 "data_offset": 2048, 00:24:51.806 "data_size": 63488 00:24:51.806 }, 00:24:51.806 { 00:24:51.806 "name": "pt3", 00:24:51.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:51.806 "is_configured": true, 00:24:51.806 "data_offset": 2048, 00:24:51.806 "data_size": 63488 00:24:51.806 }, 00:24:51.806 { 00:24:51.806 "name": "pt4", 00:24:51.806 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:51.806 "is_configured": true, 00:24:51.806 "data_offset": 2048, 00:24:51.806 "data_size": 63488 00:24:51.806 } 00:24:51.806 ] 00:24:51.806 }' 00:24:51.806 06:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:51.806 06:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.375 06:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:24:52.375 06:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:52.375 06:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:52.375 06:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:52.375 06:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:52.375 06:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:52.375 06:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:52.375 06:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:52.634 [2024-08-14 06:56:19.737633] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:52.634 06:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:52.634 "name": "raid_bdev1", 00:24:52.634 "aliases": [ 00:24:52.634 "f149a3f0-f5e3-4592-9848-ead1c9a52dfb" 00:24:52.634 ], 00:24:52.634 "product_name": "Raid Volume", 00:24:52.634 "block_size": 512, 00:24:52.634 "num_blocks": 190464, 00:24:52.634 "uuid": "f149a3f0-f5e3-4592-9848-ead1c9a52dfb", 00:24:52.634 "assigned_rate_limits": { 00:24:52.634 "rw_ios_per_sec": 0, 00:24:52.634 "rw_mbytes_per_sec": 0, 00:24:52.634 "r_mbytes_per_sec": 0, 00:24:52.634 "w_mbytes_per_sec": 0 00:24:52.634 }, 00:24:52.634 "claimed": false, 00:24:52.634 "zoned": false, 00:24:52.634 "supported_io_types": { 00:24:52.634 
"read": true, 00:24:52.634 "write": true, 00:24:52.634 "unmap": false, 00:24:52.634 "flush": false, 00:24:52.634 "reset": true, 00:24:52.634 "nvme_admin": false, 00:24:52.634 "nvme_io": false, 00:24:52.634 "nvme_io_md": false, 00:24:52.634 "write_zeroes": true, 00:24:52.634 "zcopy": false, 00:24:52.634 "get_zone_info": false, 00:24:52.634 "zone_management": false, 00:24:52.634 "zone_append": false, 00:24:52.634 "compare": false, 00:24:52.634 "compare_and_write": false, 00:24:52.634 "abort": false, 00:24:52.634 "seek_hole": false, 00:24:52.634 "seek_data": false, 00:24:52.634 "copy": false, 00:24:52.634 "nvme_iov_md": false 00:24:52.634 }, 00:24:52.634 "driver_specific": { 00:24:52.634 "raid": { 00:24:52.635 "uuid": "f149a3f0-f5e3-4592-9848-ead1c9a52dfb", 00:24:52.635 "strip_size_kb": 64, 00:24:52.635 "state": "online", 00:24:52.635 "raid_level": "raid5f", 00:24:52.635 "superblock": true, 00:24:52.635 "num_base_bdevs": 4, 00:24:52.635 "num_base_bdevs_discovered": 4, 00:24:52.635 "num_base_bdevs_operational": 4, 00:24:52.635 "base_bdevs_list": [ 00:24:52.635 { 00:24:52.635 "name": "pt1", 00:24:52.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:52.635 "is_configured": true, 00:24:52.635 "data_offset": 2048, 00:24:52.635 "data_size": 63488 00:24:52.635 }, 00:24:52.635 { 00:24:52.635 "name": "pt2", 00:24:52.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:52.635 "is_configured": true, 00:24:52.635 "data_offset": 2048, 00:24:52.635 "data_size": 63488 00:24:52.635 }, 00:24:52.635 { 00:24:52.635 "name": "pt3", 00:24:52.635 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:52.635 "is_configured": true, 00:24:52.635 "data_offset": 2048, 00:24:52.635 "data_size": 63488 00:24:52.635 }, 00:24:52.635 { 00:24:52.635 "name": "pt4", 00:24:52.635 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:52.635 "is_configured": true, 00:24:52.635 "data_offset": 2048, 00:24:52.635 "data_size": 63488 00:24:52.635 } 00:24:52.635 ] 00:24:52.635 } 00:24:52.635 } 00:24:52.635 }' 00:24:52.635 06:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:52.635 06:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:52.635 pt2 00:24:52.635 pt3 00:24:52.635 pt4' 00:24:52.635 06:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:52.635 06:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:52.635 06:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:52.894 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:52.894 "name": "pt1", 00:24:52.894 "aliases": [ 00:24:52.894 "00000000-0000-0000-0000-000000000001" 00:24:52.894 ], 00:24:52.894 "product_name": "passthru", 00:24:52.894 "block_size": 512, 00:24:52.894 "num_blocks": 65536, 00:24:52.894 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:52.894 "assigned_rate_limits": { 00:24:52.894 "rw_ios_per_sec": 0, 00:24:52.894 "rw_mbytes_per_sec": 0, 00:24:52.894 "r_mbytes_per_sec": 0, 00:24:52.894 "w_mbytes_per_sec": 0 00:24:52.894 }, 00:24:52.894 "claimed": true, 00:24:52.894 "claim_type": "exclusive_write", 00:24:52.894 "zoned": false, 00:24:52.894 "supported_io_types": { 00:24:52.894 "read": true, 00:24:52.894 "write": true, 00:24:52.894 "unmap": true, 00:24:52.894 
"flush": true, 00:24:52.894 "reset": true, 00:24:52.894 "nvme_admin": false, 00:24:52.894 "nvme_io": false, 00:24:52.894 "nvme_io_md": false, 00:24:52.894 "write_zeroes": true, 00:24:52.894 "zcopy": true, 00:24:52.894 "get_zone_info": false, 00:24:52.894 "zone_management": false, 00:24:52.894 "zone_append": false, 00:24:52.894 "compare": false, 00:24:52.894 "compare_and_write": false, 00:24:52.894 "abort": true, 00:24:52.894 "seek_hole": false, 00:24:52.894 "seek_data": false, 00:24:52.894 "copy": true, 00:24:52.894 "nvme_iov_md": false 00:24:52.894 }, 00:24:52.894 "memory_domains": [ 00:24:52.894 { 00:24:52.894 "dma_device_id": "system", 00:24:52.894 "dma_device_type": 1 00:24:52.894 }, 00:24:52.894 { 00:24:52.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:52.894 "dma_device_type": 2 00:24:52.894 } 00:24:52.894 ], 00:24:52.894 "driver_specific": { 00:24:52.894 "passthru": { 00:24:52.894 "name": "pt1", 00:24:52.894 "base_bdev_name": "malloc1" 00:24:52.894 } 00:24:52.894 } 00:24:52.894 }' 00:24:52.894 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:52.894 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:52.894 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:52.894 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:53.153 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:53.153 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:53.153 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:53.153 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:53.153 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:53.153 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:53.153 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:53.153 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:53.153 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:53.412 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:53.412 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:53.412 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:53.412 "name": "pt2", 00:24:53.412 "aliases": [ 00:24:53.412 "00000000-0000-0000-0000-000000000002" 00:24:53.412 ], 00:24:53.412 "product_name": "passthru", 00:24:53.412 "block_size": 512, 00:24:53.412 "num_blocks": 65536, 00:24:53.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:53.412 "assigned_rate_limits": { 00:24:53.412 "rw_ios_per_sec": 0, 00:24:53.412 "rw_mbytes_per_sec": 0, 00:24:53.412 "r_mbytes_per_sec": 0, 00:24:53.412 "w_mbytes_per_sec": 0 00:24:53.412 }, 00:24:53.412 "claimed": true, 00:24:53.412 "claim_type": "exclusive_write", 00:24:53.412 "zoned": false, 00:24:53.412 "supported_io_types": { 00:24:53.412 "read": true, 00:24:53.412 "write": true, 00:24:53.412 "unmap": true, 00:24:53.412 "flush": true, 00:24:53.412 "reset": true, 00:24:53.412 "nvme_admin": false, 00:24:53.412 "nvme_io": false, 00:24:53.412 
"nvme_io_md": false, 00:24:53.412 "write_zeroes": true, 00:24:53.412 "zcopy": true, 00:24:53.412 "get_zone_info": false, 00:24:53.412 "zone_management": false, 00:24:53.412 "zone_append": false, 00:24:53.412 "compare": false, 00:24:53.412 "compare_and_write": false, 00:24:53.412 "abort": true, 00:24:53.412 "seek_hole": false, 00:24:53.412 "seek_data": false, 00:24:53.412 "copy": true, 00:24:53.412 "nvme_iov_md": false 00:24:53.412 }, 00:24:53.412 "memory_domains": [ 00:24:53.412 { 00:24:53.412 "dma_device_id": "system", 00:24:53.412 "dma_device_type": 1 00:24:53.412 }, 00:24:53.412 { 00:24:53.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:53.412 "dma_device_type": 2 00:24:53.412 } 00:24:53.412 ], 00:24:53.412 "driver_specific": { 00:24:53.412 "passthru": { 00:24:53.412 "name": "pt2", 00:24:53.412 "base_bdev_name": "malloc2" 00:24:53.412 } 00:24:53.412 } 00:24:53.412 }' 00:24:53.412 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:53.671 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:53.671 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:53.671 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:53.671 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:53.671 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:53.671 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:53.671 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:53.671 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:53.671 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:53.931 06:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:53.931 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:53.931 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:53.931 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:53.931 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:54.189 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:54.189 "name": "pt3", 00:24:54.189 "aliases": [ 00:24:54.189 "00000000-0000-0000-0000-000000000003" 00:24:54.189 ], 00:24:54.189 "product_name": "passthru", 00:24:54.189 "block_size": 512, 00:24:54.189 "num_blocks": 65536, 00:24:54.189 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:54.189 "assigned_rate_limits": { 00:24:54.189 "rw_ios_per_sec": 0, 00:24:54.189 "rw_mbytes_per_sec": 0, 00:24:54.189 "r_mbytes_per_sec": 0, 00:24:54.189 "w_mbytes_per_sec": 0 00:24:54.189 }, 00:24:54.189 "claimed": true, 00:24:54.189 "claim_type": "exclusive_write", 00:24:54.189 "zoned": false, 00:24:54.189 "supported_io_types": { 00:24:54.189 "read": true, 00:24:54.189 "write": true, 00:24:54.189 "unmap": true, 00:24:54.189 "flush": true, 00:24:54.189 "reset": true, 00:24:54.189 "nvme_admin": false, 00:24:54.189 "nvme_io": false, 00:24:54.190 "nvme_io_md": false, 00:24:54.190 "write_zeroes": true, 00:24:54.190 "zcopy": true, 00:24:54.190 "get_zone_info": false, 
00:24:54.190 "zone_management": false, 00:24:54.190 "zone_append": false, 00:24:54.190 "compare": false, 00:24:54.190 "compare_and_write": false, 00:24:54.190 "abort": true, 00:24:54.190 "seek_hole": false, 00:24:54.190 "seek_data": false, 00:24:54.190 "copy": true, 00:24:54.190 "nvme_iov_md": false 00:24:54.190 }, 00:24:54.190 "memory_domains": [ 00:24:54.190 { 00:24:54.190 "dma_device_id": "system", 00:24:54.190 "dma_device_type": 1 00:24:54.190 }, 00:24:54.190 { 00:24:54.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:54.190 "dma_device_type": 2 00:24:54.190 } 00:24:54.190 ], 00:24:54.190 "driver_specific": { 00:24:54.190 "passthru": { 00:24:54.190 "name": "pt3", 00:24:54.190 "base_bdev_name": "malloc3" 00:24:54.190 } 00:24:54.190 } 00:24:54.190 }' 00:24:54.190 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:54.190 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:54.190 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:54.190 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:54.190 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:54.190 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:54.190 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:54.447 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:54.447 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:54.447 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:54.447 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:54.447 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:54.447 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:54.447 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:54.447 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:54.705 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:54.705 "name": "pt4", 00:24:54.705 "aliases": [ 00:24:54.705 "00000000-0000-0000-0000-000000000004" 00:24:54.705 ], 00:24:54.705 "product_name": "passthru", 00:24:54.705 "block_size": 512, 00:24:54.705 "num_blocks": 65536, 00:24:54.705 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:54.705 "assigned_rate_limits": { 00:24:54.705 "rw_ios_per_sec": 0, 00:24:54.705 "rw_mbytes_per_sec": 0, 00:24:54.705 "r_mbytes_per_sec": 0, 00:24:54.705 "w_mbytes_per_sec": 0 00:24:54.705 }, 00:24:54.705 "claimed": true, 00:24:54.705 "claim_type": "exclusive_write", 00:24:54.705 "zoned": false, 00:24:54.705 "supported_io_types": { 00:24:54.705 "read": true, 00:24:54.705 "write": true, 00:24:54.705 "unmap": true, 00:24:54.705 "flush": true, 00:24:54.705 "reset": true, 00:24:54.705 "nvme_admin": false, 00:24:54.705 "nvme_io": false, 00:24:54.705 "nvme_io_md": false, 00:24:54.705 "write_zeroes": true, 00:24:54.705 "zcopy": true, 00:24:54.705 "get_zone_info": false, 00:24:54.705 "zone_management": false, 00:24:54.705 "zone_append": false, 00:24:54.705 "compare": false, 00:24:54.705 
"compare_and_write": false, 00:24:54.705 "abort": true, 00:24:54.705 "seek_hole": false, 00:24:54.705 "seek_data": false, 00:24:54.705 "copy": true, 00:24:54.705 "nvme_iov_md": false 00:24:54.705 }, 00:24:54.705 "memory_domains": [ 00:24:54.705 { 00:24:54.705 "dma_device_id": "system", 00:24:54.705 "dma_device_type": 1 00:24:54.705 }, 00:24:54.705 { 00:24:54.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:54.705 "dma_device_type": 2 00:24:54.705 } 00:24:54.705 ], 00:24:54.705 "driver_specific": { 00:24:54.705 "passthru": { 00:24:54.705 "name": "pt4", 00:24:54.705 "base_bdev_name": "malloc4" 00:24:54.705 } 00:24:54.705 } 00:24:54.705 }' 00:24:54.705 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:54.705 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:54.963 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:54.963 06:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:54.963 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:54.963 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:54.963 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:54.963 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:54.963 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:54.963 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:55.221 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:55.221 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:55.221 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:55.221 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:24:55.221 [2024-08-14 06:56:22.469331] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:55.478 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' f149a3f0-f5e3-4592-9848-ead1c9a52dfb '!=' f149a3f0-f5e3-4592-9848-ead1c9a52dfb ']' 00:24:55.478 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid5f 00:24:55.478 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:55.478 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:24:55.478 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:55.478 [2024-08-14 06:56:22.712742] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:55.736 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:55.736 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:55.736 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:55.736 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:55.736 06:56:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:55.736 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:55.736 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:55.736 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:55.736 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:55.736 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:55.736 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.736 06:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.994 06:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:55.994 "name": "raid_bdev1", 00:24:55.994 "uuid": "f149a3f0-f5e3-4592-9848-ead1c9a52dfb", 00:24:55.994 "strip_size_kb": 64, 00:24:55.994 "state": "online", 00:24:55.994 "raid_level": "raid5f", 00:24:55.994 "superblock": true, 00:24:55.994 "num_base_bdevs": 4, 00:24:55.994 "num_base_bdevs_discovered": 3, 00:24:55.994 "num_base_bdevs_operational": 3, 00:24:55.994 "base_bdevs_list": [ 00:24:55.994 { 00:24:55.994 "name": null, 00:24:55.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.994 "is_configured": false, 00:24:55.994 "data_offset": 2048, 00:24:55.994 "data_size": 63488 00:24:55.994 }, 00:24:55.994 { 00:24:55.994 "name": "pt2", 00:24:55.994 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:55.994 "is_configured": true, 00:24:55.994 "data_offset": 2048, 00:24:55.994 "data_size": 63488 00:24:55.994 }, 00:24:55.994 { 00:24:55.994 "name": "pt3", 00:24:55.994 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:55.994 "is_configured": true, 00:24:55.994 "data_offset": 2048, 00:24:55.994 "data_size": 63488 00:24:55.994 }, 00:24:55.994 { 00:24:55.994 "name": "pt4", 00:24:55.994 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:55.994 "is_configured": true, 00:24:55.994 "data_offset": 2048, 00:24:55.994 "data_size": 63488 00:24:55.994 } 00:24:55.994 ] 00:24:55.994 }' 00:24:55.994 06:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:55.994 06:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.559 06:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:56.816 [2024-08-14 06:56:23.882691] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:56.817 [2024-08-14 06:56:23.882745] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:56.817 [2024-08-14 06:56:23.882834] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:56.817 [2024-08-14 06:56:23.882908] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:56.817 [2024-08-14 06:56:23.882917] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:24:56.817 06:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:24:56.817 06:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:24:57.108 06:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:24:57.108 06:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:24:57.108 06:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:57.108 06:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:24:57.108 06:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:57.375 06:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:57.375 06:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:24:57.375 06:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:57.375 06:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:57.375 06:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:24:57.375 06:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:57.634 06:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:57.634 06:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:24:57.634 06:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:24:57.634 06:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:24:57.634 06:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:57.892 [2024-08-14 06:56:25.080665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:57.892 [2024-08-14 06:56:25.080741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:57.892 [2024-08-14 06:56:25.080765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:57.892 [2024-08-14 06:56:25.080775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:57.892 [2024-08-14 06:56:25.083243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:57.892 [2024-08-14 06:56:25.083331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:57.892 [2024-08-14 06:56:25.083451] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:57.892 [2024-08-14 06:56:25.083501] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:57.892 pt2 00:24:57.892 06:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:57.892 06:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:57.892 06:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:57.892 06:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid5f 00:24:57.892 06:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:57.892 06:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:57.892 06:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:57.892 06:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:57.892 06:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:57.892 06:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:57.892 06:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.892 06:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.150 06:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:58.150 "name": "raid_bdev1", 00:24:58.150 "uuid": "f149a3f0-f5e3-4592-9848-ead1c9a52dfb", 00:24:58.150 "strip_size_kb": 64, 00:24:58.150 "state": "configuring", 00:24:58.150 "raid_level": "raid5f", 00:24:58.150 "superblock": true, 00:24:58.150 "num_base_bdevs": 4, 00:24:58.150 "num_base_bdevs_discovered": 1, 00:24:58.150 "num_base_bdevs_operational": 3, 00:24:58.150 "base_bdevs_list": [ 00:24:58.150 { 00:24:58.150 "name": null, 00:24:58.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.150 "is_configured": false, 00:24:58.150 "data_offset": 2048, 00:24:58.150 "data_size": 63488 00:24:58.150 }, 00:24:58.150 { 00:24:58.150 "name": "pt2", 00:24:58.150 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:58.150 "is_configured": true, 00:24:58.150 "data_offset": 2048, 00:24:58.150 "data_size": 63488 00:24:58.150 }, 00:24:58.150 { 00:24:58.150 "name": null, 00:24:58.150 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:58.150 "is_configured": false, 00:24:58.150 "data_offset": 2048, 00:24:58.150 "data_size": 63488 00:24:58.150 }, 00:24:58.150 { 00:24:58.150 "name": null, 00:24:58.150 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:58.150 "is_configured": false, 00:24:58.150 "data_offset": 2048, 00:24:58.150 "data_size": 63488 00:24:58.150 } 00:24:58.150 ] 00:24:58.150 }' 00:24:58.150 06:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:58.150 06:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.085 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:24:59.085 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:24:59.085 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:59.085 [2024-08-14 06:56:26.262714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:59.085 [2024-08-14 06:56:26.262887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.085 [2024-08-14 06:56:26.262944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:59.085 [2024-08-14 06:56:26.263011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.085 [2024-08-14 
06:56:26.263507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:59.085 [2024-08-14 06:56:26.263570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:59.085 [2024-08-14 06:56:26.263707] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:59.085 [2024-08-14 06:56:26.263763] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:59.085 pt3 00:24:59.085 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:59.085 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:59.085 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:59.085 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:59.085 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:59.085 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:59.085 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:59.085 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:59.085 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:59.085 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:59.085 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.085 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.343 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:59.343 "name": "raid_bdev1", 00:24:59.343 "uuid": "f149a3f0-f5e3-4592-9848-ead1c9a52dfb", 00:24:59.343 "strip_size_kb": 64, 00:24:59.343 "state": "configuring", 00:24:59.343 "raid_level": "raid5f", 00:24:59.343 "superblock": true, 00:24:59.343 "num_base_bdevs": 4, 00:24:59.343 "num_base_bdevs_discovered": 2, 00:24:59.343 "num_base_bdevs_operational": 3, 00:24:59.343 "base_bdevs_list": [ 00:24:59.344 { 00:24:59.344 "name": null, 00:24:59.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.344 "is_configured": false, 00:24:59.344 "data_offset": 2048, 00:24:59.344 "data_size": 63488 00:24:59.344 }, 00:24:59.344 { 00:24:59.344 "name": "pt2", 00:24:59.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:59.344 "is_configured": true, 00:24:59.344 "data_offset": 2048, 00:24:59.344 "data_size": 63488 00:24:59.344 }, 00:24:59.344 { 00:24:59.344 "name": "pt3", 00:24:59.344 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:59.344 "is_configured": true, 00:24:59.344 "data_offset": 2048, 00:24:59.344 "data_size": 63488 00:24:59.344 }, 00:24:59.344 { 00:24:59.344 "name": null, 00:24:59.344 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:59.344 "is_configured": false, 00:24:59.344 "data_offset": 2048, 00:24:59.344 "data_size": 63488 00:24:59.344 } 00:24:59.344 ] 00:24:59.344 }' 00:24:59.344 06:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:59.344 06:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.910 
06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:24:59.910 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:24:59.910 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:24:59.910 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:00.168 [2024-08-14 06:56:27.356859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:00.168 [2024-08-14 06:56:27.356936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:00.168 [2024-08-14 06:56:27.356963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:00.168 [2024-08-14 06:56:27.356973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:00.168 [2024-08-14 06:56:27.357470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:00.168 [2024-08-14 06:56:27.357496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:00.168 [2024-08-14 06:56:27.357583] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:00.168 [2024-08-14 06:56:27.357607] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:00.168 [2024-08-14 06:56:27.357724] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:25:00.168 [2024-08-14 06:56:27.357733] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:00.168 [2024-08-14 06:56:27.358003] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:25:00.168 [2024-08-14 06:56:27.358612] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:25:00.168 [2024-08-14 06:56:27.358634] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:25:00.168 [2024-08-14 06:56:27.358886] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:00.168 pt4 00:25:00.168 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:00.168 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:00.168 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:00.168 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:00.168 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:00.168 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:00.168 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:00.168 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:00.168 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:00.168 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:00.168 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.168 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.427 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:00.427 "name": "raid_bdev1", 00:25:00.427 "uuid": "f149a3f0-f5e3-4592-9848-ead1c9a52dfb", 00:25:00.427 "strip_size_kb": 64, 00:25:00.427 "state": "online", 00:25:00.427 "raid_level": "raid5f", 00:25:00.427 "superblock": true, 00:25:00.427 "num_base_bdevs": 4, 00:25:00.427 "num_base_bdevs_discovered": 3, 00:25:00.427 "num_base_bdevs_operational": 3, 00:25:00.427 "base_bdevs_list": [ 00:25:00.427 { 00:25:00.427 "name": null, 00:25:00.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.427 "is_configured": false, 00:25:00.427 "data_offset": 2048, 00:25:00.427 "data_size": 63488 00:25:00.427 }, 00:25:00.427 { 00:25:00.427 "name": "pt2", 00:25:00.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:00.428 "is_configured": true, 00:25:00.428 "data_offset": 2048, 00:25:00.428 "data_size": 63488 00:25:00.428 }, 00:25:00.428 { 00:25:00.428 "name": "pt3", 00:25:00.428 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:00.428 "is_configured": true, 00:25:00.428 "data_offset": 2048, 00:25:00.428 "data_size": 63488 00:25:00.428 }, 00:25:00.428 { 00:25:00.428 "name": "pt4", 00:25:00.428 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:00.428 "is_configured": true, 00:25:00.428 "data_offset": 2048, 00:25:00.428 "data_size": 63488 00:25:00.428 } 00:25:00.428 ] 00:25:00.428 }' 00:25:00.428 06:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:00.428 06:56:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.364 06:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:01.364 [2024-08-14 06:56:28.523235] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:01.364 [2024-08-14 06:56:28.523382] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:01.364 [2024-08-14 06:56:28.523511] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:01.364 [2024-08-14 06:56:28.523636] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:01.364 [2024-08-14 06:56:28.523707] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:25:01.364 06:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.364 06:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:25:01.623 06:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:25:01.623 06:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:25:01.623 06:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 4 -gt 2 ']' 00:25:01.623 06:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # i=3 00:25:01.623 06:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:01.883 06:56:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:02.142 [2024-08-14 06:56:29.310323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:02.142 [2024-08-14 06:56:29.310507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:02.142 [2024-08-14 06:56:29.310552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:02.142 [2024-08-14 06:56:29.310607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:02.142 [2024-08-14 06:56:29.313321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:02.142 [2024-08-14 06:56:29.313439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:02.142 [2024-08-14 06:56:29.313597] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:02.142 [2024-08-14 06:56:29.313710] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:02.142 pt1 00:25:02.142 [2024-08-14 06:56:29.313913] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:02.142 [2024-08-14 06:56:29.313941] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:02.142 [2024-08-14 06:56:29.313963] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:25:02.142 [2024-08-14 06:56:29.314024] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:02.142 [2024-08-14 06:56:29.314187] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:02.142 06:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4 -gt 2 ']' 00:25:02.142 06:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:02.142 06:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:02.142 06:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:02.142 06:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:02.142 06:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:02.142 06:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:02.142 06:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:02.142 06:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:02.142 06:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:02.142 06:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:02.142 06:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.142 06:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.402 06:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:02.402 "name": "raid_bdev1", 00:25:02.402 
"uuid": "f149a3f0-f5e3-4592-9848-ead1c9a52dfb", 00:25:02.402 "strip_size_kb": 64, 00:25:02.402 "state": "configuring", 00:25:02.402 "raid_level": "raid5f", 00:25:02.402 "superblock": true, 00:25:02.402 "num_base_bdevs": 4, 00:25:02.402 "num_base_bdevs_discovered": 2, 00:25:02.402 "num_base_bdevs_operational": 3, 00:25:02.402 "base_bdevs_list": [ 00:25:02.402 { 00:25:02.402 "name": null, 00:25:02.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.402 "is_configured": false, 00:25:02.402 "data_offset": 2048, 00:25:02.402 "data_size": 63488 00:25:02.402 }, 00:25:02.402 { 00:25:02.402 "name": "pt2", 00:25:02.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:02.402 "is_configured": true, 00:25:02.402 "data_offset": 2048, 00:25:02.402 "data_size": 63488 00:25:02.402 }, 00:25:02.402 { 00:25:02.402 "name": "pt3", 00:25:02.402 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:02.402 "is_configured": true, 00:25:02.402 "data_offset": 2048, 00:25:02.402 "data_size": 63488 00:25:02.402 }, 00:25:02.402 { 00:25:02.402 "name": null, 00:25:02.402 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:02.402 "is_configured": false, 00:25:02.402 "data_offset": 2048, 00:25:02.402 "data_size": 63488 00:25:02.402 } 00:25:02.402 ] 00:25:02.402 }' 00:25:02.402 06:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:02.402 06:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.350 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:03.350 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:25:03.350 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:25:03.350 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:03.608 [2024-08-14 06:56:30.737074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:03.608 [2024-08-14 06:56:30.737273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:03.608 [2024-08-14 06:56:30.737340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:25:03.608 [2024-08-14 06:56:30.737379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:03.608 [2024-08-14 06:56:30.737888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:03.608 [2024-08-14 06:56:30.737962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:03.608 [2024-08-14 06:56:30.738105] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:03.608 [2024-08-14 06:56:30.738194] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:03.608 [2024-08-14 06:56:30.738387] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:25:03.608 [2024-08-14 06:56:30.738434] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:03.608 [2024-08-14 06:56:30.738765] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:25:03.608 [2024-08-14 06:56:30.739479] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:25:03.608 [2024-08-14 06:56:30.739553] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:25:03.608 [2024-08-14 06:56:30.739813] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:03.608 pt4 00:25:03.608 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:03.608 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:03.608 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:03.608 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:03.608 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:03.608 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:03.608 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:03.608 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:03.608 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:03.608 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:03.608 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.608 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.867 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:03.867 "name": "raid_bdev1", 00:25:03.867 "uuid": "f149a3f0-f5e3-4592-9848-ead1c9a52dfb", 00:25:03.867 "strip_size_kb": 64, 00:25:03.867 "state": "online", 00:25:03.867 "raid_level": "raid5f", 00:25:03.867 "superblock": true, 00:25:03.867 "num_base_bdevs": 4, 00:25:03.867 "num_base_bdevs_discovered": 3, 00:25:03.867 "num_base_bdevs_operational": 3, 00:25:03.867 "base_bdevs_list": [ 00:25:03.867 { 00:25:03.867 "name": null, 00:25:03.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.867 "is_configured": false, 00:25:03.867 "data_offset": 2048, 00:25:03.867 "data_size": 63488 00:25:03.867 }, 00:25:03.867 { 00:25:03.867 "name": "pt2", 00:25:03.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:03.867 "is_configured": true, 00:25:03.867 "data_offset": 2048, 00:25:03.867 "data_size": 63488 00:25:03.867 }, 00:25:03.867 { 00:25:03.867 "name": "pt3", 00:25:03.867 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:03.867 "is_configured": true, 00:25:03.867 "data_offset": 2048, 00:25:03.867 "data_size": 63488 00:25:03.867 }, 00:25:03.867 { 00:25:03.867 "name": "pt4", 00:25:03.867 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:03.867 "is_configured": true, 00:25:03.867 "data_offset": 2048, 00:25:03.867 "data_size": 63488 00:25:03.867 } 00:25:03.867 ] 00:25:03.867 }' 00:25:03.867 06:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:03.867 06:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.434 06:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:25:04.434 06:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:04.692 06:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:25:04.692 06:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:04.692 06:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:25:04.951 [2024-08-14 06:56:32.003720] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:04.951 06:56:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' f149a3f0-f5e3-4592-9848-ead1c9a52dfb '!=' f149a3f0-f5e3-4592-9848-ead1c9a52dfb ']' 00:25:04.951 06:56:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 103910 00:25:04.951 06:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 103910 ']' 00:25:04.951 06:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # kill -0 103910 00:25:04.951 06:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # uname 00:25:04.951 06:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:04.951 06:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103910 00:25:04.951 06:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:04.951 06:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:04.951 06:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103910' 00:25:04.951 killing process with pid 103910 00:25:04.951 06:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@965 -- # kill 103910 00:25:04.951 [2024-08-14 06:56:32.061031] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:04.951 06:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # wait 103910 00:25:04.951 [2024-08-14 06:56:32.061220] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:04.951 [2024-08-14 06:56:32.061350] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:04.951 [2024-08-14 06:56:32.061417] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:25:04.951 [2024-08-14 06:56:32.106415] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:05.210 06:56:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:25:05.210 00:25:05.210 real 0m26.258s 00:25:05.210 user 0m48.573s 00:25:05.210 sys 0m4.059s 00:25:05.210 06:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:05.210 06:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.210 ************************************ 00:25:05.210 END TEST raid5f_superblock_test 00:25:05.210 ************************************ 00:25:05.210 06:56:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # '[' true = true ']' 00:25:05.210 06:56:32 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 
00:25:05.210 06:56:32 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:25:05.210 06:56:32 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:05.210 06:56:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:05.210 ************************************ 00:25:05.210 START TEST raid5f_rebuild_test 00:25:05.210 ************************************ 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid5f 4 false false true 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:25:05.210 06:56:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=104726 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 104726 /var/tmp/spdk-raid.sock 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 104726 ']' 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:05.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:05.210 06:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.469 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:05.469 Zero copy mechanism will not be used. 00:25:05.469 [2024-08-14 06:56:32.526673] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:25:05.469 [2024-08-14 06:56:32.526829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104726 ] 00:25:05.469 [2024-08-14 06:56:32.671822] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.469 [2024-08-14 06:56:32.720514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.728 [2024-08-14 06:56:32.763874] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:05.728 [2024-08-14 06:56:32.763916] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:06.295 06:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:06.295 06:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:25:06.295 06:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:06.295 06:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:06.553 BaseBdev1_malloc 00:25:06.553 06:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:06.815 [2024-08-14 06:56:33.829564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:06.815 [2024-08-14 06:56:33.829710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:06.815 [2024-08-14 06:56:33.829761] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:25:06.815 [2024-08-14 06:56:33.829775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:06.815 [2024-08-14 06:56:33.832125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:06.815 [2024-08-14 06:56:33.832180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:06.815 BaseBdev1 00:25:06.815 06:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:06.815 06:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:07.074 BaseBdev2_malloc 00:25:07.074 06:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:07.074 [2024-08-14 06:56:34.302268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:07.074 [2024-08-14 06:56:34.302360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:07.074 [2024-08-14 06:56:34.302388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:07.074 [2024-08-14 06:56:34.302401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:07.074 [2024-08-14 06:56:34.304717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:07.074 [2024-08-14 06:56:34.304766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:07.074 BaseBdev2 00:25:07.074 06:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:07.074 06:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:07.333 BaseBdev3_malloc 00:25:07.593 06:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:07.593 [2024-08-14 06:56:34.830589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:07.593 [2024-08-14 06:56:34.830772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:07.593 [2024-08-14 06:56:34.830806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:07.593 [2024-08-14 06:56:34.830820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:07.593 [2024-08-14 06:56:34.833375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:07.593 [2024-08-14 06:56:34.833424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:07.593 BaseBdev3 00:25:07.851 06:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:07.851 06:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:07.851 BaseBdev4_malloc 00:25:07.851 06:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:08.110 [2024-08-14 06:56:35.323331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:08.110 [2024-08-14 06:56:35.323544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:08.110 [2024-08-14 06:56:35.323598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:08.110 [2024-08-14 06:56:35.323660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:08.110 [2024-08-14 06:56:35.326127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:08.110 [2024-08-14 06:56:35.326231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:08.110 BaseBdev4 00:25:08.110 06:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:08.369 spare_malloc 00:25:08.369 06:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:08.628 spare_delay 00:25:08.628 06:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:08.887 [2024-08-14 06:56:36.087682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:08.887 [2024-08-14 06:56:36.087766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:08.887 [2024-08-14 06:56:36.087793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:08.887 [2024-08-14 06:56:36.087805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:08.887 [2024-08-14 06:56:36.090465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:08.887 spare 00:25:08.887 [2024-08-14 06:56:36.090583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:08.887 06:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:09.146 [2024-08-14 06:56:36.331447] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:09.146 [2024-08-14 06:56:36.333737] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:09.146 [2024-08-14 06:56:36.333855] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:09.146 [2024-08-14 06:56:36.333963] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:09.146 [2024-08-14 06:56:36.334143] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:25:09.146 [2024-08-14 06:56:36.334219] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:09.146 [2024-08-14 06:56:36.334628] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:25:09.146 [2024-08-14 06:56:36.335288] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:25:09.146 [2024-08-14 
06:56:36.335351] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:25:09.146 [2024-08-14 06:56:36.335618] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:09.146 06:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:09.146 06:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:09.146 06:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:09.146 06:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:09.146 06:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:09.146 06:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:09.146 06:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:09.146 06:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:09.146 06:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:09.146 06:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:09.146 06:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.146 06:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.405 06:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:09.405 "name": "raid_bdev1", 00:25:09.405 "uuid": "4243eec8-bcce-4325-bf59-732c21f6e39b", 00:25:09.405 "strip_size_kb": 64, 00:25:09.405 "state": "online", 00:25:09.405 "raid_level": "raid5f", 00:25:09.405 "superblock": false, 00:25:09.405 "num_base_bdevs": 4, 00:25:09.405 "num_base_bdevs_discovered": 4, 00:25:09.405 "num_base_bdevs_operational": 4, 00:25:09.405 "base_bdevs_list": [ 00:25:09.405 { 00:25:09.405 "name": "BaseBdev1", 00:25:09.405 "uuid": "9b9919fa-2de4-5b87-a429-1b918aa8631d", 00:25:09.405 "is_configured": true, 00:25:09.405 "data_offset": 0, 00:25:09.405 "data_size": 65536 00:25:09.405 }, 00:25:09.405 { 00:25:09.405 "name": "BaseBdev2", 00:25:09.405 "uuid": "28e09dcc-b0c7-5e31-9e09-1db3fb129067", 00:25:09.405 "is_configured": true, 00:25:09.405 "data_offset": 0, 00:25:09.405 "data_size": 65536 00:25:09.405 }, 00:25:09.405 { 00:25:09.405 "name": "BaseBdev3", 00:25:09.405 "uuid": "5b09a469-fa43-5391-8d12-710e2da52318", 00:25:09.405 "is_configured": true, 00:25:09.405 "data_offset": 0, 00:25:09.405 "data_size": 65536 00:25:09.405 }, 00:25:09.405 { 00:25:09.405 "name": "BaseBdev4", 00:25:09.405 "uuid": "13e669f2-541a-52a9-9694-387b6b120784", 00:25:09.405 "is_configured": true, 00:25:09.405 "data_offset": 0, 00:25:09.405 "data_size": 65536 00:25:09.405 } 00:25:09.405 ] 00:25:09.405 }' 00:25:09.405 06:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:09.405 06:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.343 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:10.343 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 
00:25:10.343 [2024-08-14 06:56:37.469975] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:10.343 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=196608 00:25:10.343 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:10.343 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.602 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:25:10.602 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:25:10.602 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:25:10.602 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:25:10.602 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:10.602 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:10.602 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:10.602 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:10.602 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:10.602 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:10.602 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:25:10.602 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:10.602 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:10.602 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:10.878 [2024-08-14 06:56:37.952982] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:25:10.878 /dev/nbd0 00:25:10.878 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:10.878 06:56:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:10.878 06:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:25:10.878 06:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:25:10.878 06:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:25:10.878 06:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:25:10.878 06:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:25:10.878 06:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:25:10.878 06:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:25:10.878 06:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:25:10.878 06:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:10.878 1+0 records in 00:25:10.878 1+0 records out 00:25:10.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000510462 s, 8.0 MB/s 00:25:10.878 06:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:10.878 06:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:25:10.878 06:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:10.878 06:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:25:10.878 06:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:25:10.878 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:10.878 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:10.878 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:25:10.878 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # write_unit_size=384 00:25:10.878 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # echo 192 00:25:10.878 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:25:11.449 512+0 records in 00:25:11.449 512+0 records out 00:25:11.449 100663296 bytes (101 MB, 96 MiB) copied, 0.51663 s, 195 MB/s 00:25:11.449 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:11.449 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:11.449 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:11.449 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:11.449 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:25:11.449 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:11.449 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:11.707 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:11.707 [2024-08-14 06:56:38.775508] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:11.707 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:11.707 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:11.707 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:11.707 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:11.707 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:11.707 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:11.707 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:11.707 06:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:11.964 [2024-08-14 06:56:39.015264] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:11.964 06:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 
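The dd parameters above are not arbitrary: for raid5f over four base bdevs with a 64 KiB strip, each stripe carries three data strips plus parity, so a full-stripe write is 192 KiB, i.e. 384 blocks of 512 bytes — exactly the write_unit_size and the dd block size the test picks. A quick check of the figures the log reports (variable names other than write_unit_size are illustrative, not the ones bdev_raid.sh uses):

  strip_size_kb=64          # -z 64 passed to bdev_raid_create
  num_base_bdevs=4
  blocklen=512
  base_blocks=65536         # 32 MiB malloc bdev / 512-byte blocks

  data_strips=$((num_base_bdevs - 1))                    # one strip per stripe is parity
  stripe_bytes=$((data_strips * strip_size_kb * 1024))   # 196608 -> dd bs=196608
  write_unit_blocks=$((stripe_bytes / blocklen))         # 384 -> write_unit_size above

  raid_blocks=$((data_strips * base_blocks))             # 196608 usable blocks
  total_bytes=$((raid_blocks * blocklen))                # 100663296 bytes, the dd transfer size
  echo "bs=${stripe_bytes} count=$((total_bytes / stripe_bytes))"   # bs=196608 count=512

So the dd fills the entire array with aligned full-stripe writes before BaseBdev1 is pulled out, giving the later rebuild something verifiable to reconstruct.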
00:25:11.964 06:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:11.964 06:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:11.964 06:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:11.964 06:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:11.964 06:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:11.964 06:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:11.964 06:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:11.964 06:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:11.964 06:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:11.964 06:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.964 06:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.223 06:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:12.223 "name": "raid_bdev1", 00:25:12.223 "uuid": "4243eec8-bcce-4325-bf59-732c21f6e39b", 00:25:12.223 "strip_size_kb": 64, 00:25:12.223 "state": "online", 00:25:12.223 "raid_level": "raid5f", 00:25:12.223 "superblock": false, 00:25:12.223 "num_base_bdevs": 4, 00:25:12.223 "num_base_bdevs_discovered": 3, 00:25:12.223 "num_base_bdevs_operational": 3, 00:25:12.223 "base_bdevs_list": [ 00:25:12.223 { 00:25:12.223 "name": null, 00:25:12.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.223 "is_configured": false, 00:25:12.223 "data_offset": 0, 00:25:12.223 "data_size": 65536 00:25:12.223 }, 00:25:12.223 { 00:25:12.223 "name": "BaseBdev2", 00:25:12.223 "uuid": "28e09dcc-b0c7-5e31-9e09-1db3fb129067", 00:25:12.223 "is_configured": true, 00:25:12.223 "data_offset": 0, 00:25:12.223 "data_size": 65536 00:25:12.223 }, 00:25:12.223 { 00:25:12.223 "name": "BaseBdev3", 00:25:12.223 "uuid": "5b09a469-fa43-5391-8d12-710e2da52318", 00:25:12.223 "is_configured": true, 00:25:12.223 "data_offset": 0, 00:25:12.223 "data_size": 65536 00:25:12.223 }, 00:25:12.223 { 00:25:12.223 "name": "BaseBdev4", 00:25:12.223 "uuid": "13e669f2-541a-52a9-9694-387b6b120784", 00:25:12.223 "is_configured": true, 00:25:12.223 "data_offset": 0, 00:25:12.223 "data_size": 65536 00:25:12.223 } 00:25:12.223 ] 00:25:12.223 }' 00:25:12.223 06:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:12.223 06:56:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.789 06:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:13.047 [2024-08-14 06:56:40.165777] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:13.047 [2024-08-14 06:56:40.169415] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:25:13.047 [2024-08-14 06:56:40.171835] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:13.047 06:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 
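At this point the spare (malloc -> delay -> passthru) has been attached with bdev_raid_add_base_bdev and the raid module has started a rebuild onto it; the sleep/re-query pattern that follows is how the test watches that process. A stand-alone sketch of the same polling, assuming the rebuild is already running as it is here (the loop is illustrative; the real assertions live in verify_raid_bdev_process in bdev_raid.sh):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  while true; do
    info=$($RPC bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')
    ptype=$(jq -r '.process.type // "none"' <<< "$info")
    # the process object disappears from bdev_raid_get_bdevs once the rebuild finishes
    [ "$ptype" != "rebuild" ] && break
    jq -r '.process.progress | "\(.blocks) blocks (\(.percent)%)"' <<< "$info"
    sleep 1
  done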
00:25:13.981 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:13.981 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:13.981 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:13.981 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:13.981 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:13.981 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.981 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.240 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:14.240 "name": "raid_bdev1", 00:25:14.240 "uuid": "4243eec8-bcce-4325-bf59-732c21f6e39b", 00:25:14.240 "strip_size_kb": 64, 00:25:14.240 "state": "online", 00:25:14.240 "raid_level": "raid5f", 00:25:14.240 "superblock": false, 00:25:14.240 "num_base_bdevs": 4, 00:25:14.240 "num_base_bdevs_discovered": 4, 00:25:14.240 "num_base_bdevs_operational": 4, 00:25:14.240 "process": { 00:25:14.240 "type": "rebuild", 00:25:14.240 "target": "spare", 00:25:14.240 "progress": { 00:25:14.240 "blocks": 23040, 00:25:14.240 "percent": 11 00:25:14.240 } 00:25:14.240 }, 00:25:14.240 "base_bdevs_list": [ 00:25:14.240 { 00:25:14.240 "name": "spare", 00:25:14.240 "uuid": "3059643a-6493-5fa2-ae21-2b146cf8f977", 00:25:14.240 "is_configured": true, 00:25:14.240 "data_offset": 0, 00:25:14.240 "data_size": 65536 00:25:14.240 }, 00:25:14.240 { 00:25:14.240 "name": "BaseBdev2", 00:25:14.240 "uuid": "28e09dcc-b0c7-5e31-9e09-1db3fb129067", 00:25:14.240 "is_configured": true, 00:25:14.240 "data_offset": 0, 00:25:14.240 "data_size": 65536 00:25:14.240 }, 00:25:14.240 { 00:25:14.240 "name": "BaseBdev3", 00:25:14.240 "uuid": "5b09a469-fa43-5391-8d12-710e2da52318", 00:25:14.240 "is_configured": true, 00:25:14.240 "data_offset": 0, 00:25:14.240 "data_size": 65536 00:25:14.240 }, 00:25:14.240 { 00:25:14.240 "name": "BaseBdev4", 00:25:14.240 "uuid": "13e669f2-541a-52a9-9694-387b6b120784", 00:25:14.240 "is_configured": true, 00:25:14.240 "data_offset": 0, 00:25:14.240 "data_size": 65536 00:25:14.240 } 00:25:14.240 ] 00:25:14.240 }' 00:25:14.240 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:14.498 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:14.498 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:14.498 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:14.498 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:14.756 [2024-08-14 06:56:41.790385] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:14.756 [2024-08-14 06:56:41.885179] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:14.756 [2024-08-14 06:56:41.885308] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:14.756 [2024-08-14 06:56:41.885335] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:14.756 [2024-08-14 06:56:41.885349] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:14.756 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:14.756 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:14.756 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:14.756 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:14.756 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:14.756 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:14.756 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:14.756 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:14.756 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:14.756 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:14.756 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.756 06:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.015 06:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:15.015 "name": "raid_bdev1", 00:25:15.015 "uuid": "4243eec8-bcce-4325-bf59-732c21f6e39b", 00:25:15.015 "strip_size_kb": 64, 00:25:15.015 "state": "online", 00:25:15.015 "raid_level": "raid5f", 00:25:15.015 "superblock": false, 00:25:15.015 "num_base_bdevs": 4, 00:25:15.015 "num_base_bdevs_discovered": 3, 00:25:15.015 "num_base_bdevs_operational": 3, 00:25:15.015 "base_bdevs_list": [ 00:25:15.015 { 00:25:15.015 "name": null, 00:25:15.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.015 "is_configured": false, 00:25:15.015 "data_offset": 0, 00:25:15.015 "data_size": 65536 00:25:15.015 }, 00:25:15.015 { 00:25:15.015 "name": "BaseBdev2", 00:25:15.015 "uuid": "28e09dcc-b0c7-5e31-9e09-1db3fb129067", 00:25:15.015 "is_configured": true, 00:25:15.015 "data_offset": 0, 00:25:15.015 "data_size": 65536 00:25:15.015 }, 00:25:15.015 { 00:25:15.015 "name": "BaseBdev3", 00:25:15.015 "uuid": "5b09a469-fa43-5391-8d12-710e2da52318", 00:25:15.015 "is_configured": true, 00:25:15.015 "data_offset": 0, 00:25:15.015 "data_size": 65536 00:25:15.015 }, 00:25:15.015 { 00:25:15.015 "name": "BaseBdev4", 00:25:15.015 "uuid": "13e669f2-541a-52a9-9694-387b6b120784", 00:25:15.015 "is_configured": true, 00:25:15.015 "data_offset": 0, 00:25:15.015 "data_size": 65536 00:25:15.015 } 00:25:15.015 ] 00:25:15.015 }' 00:25:15.015 06:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:15.015 06:56:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.582 06:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:15.582 06:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:15.582 06:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:25:15.582 06:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:15.582 06:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:15.582 06:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.582 06:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.840 06:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:15.840 "name": "raid_bdev1", 00:25:15.840 "uuid": "4243eec8-bcce-4325-bf59-732c21f6e39b", 00:25:15.840 "strip_size_kb": 64, 00:25:15.840 "state": "online", 00:25:15.840 "raid_level": "raid5f", 00:25:15.840 "superblock": false, 00:25:15.840 "num_base_bdevs": 4, 00:25:15.840 "num_base_bdevs_discovered": 3, 00:25:15.840 "num_base_bdevs_operational": 3, 00:25:15.840 "base_bdevs_list": [ 00:25:15.840 { 00:25:15.840 "name": null, 00:25:15.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.840 "is_configured": false, 00:25:15.840 "data_offset": 0, 00:25:15.840 "data_size": 65536 00:25:15.840 }, 00:25:15.840 { 00:25:15.840 "name": "BaseBdev2", 00:25:15.840 "uuid": "28e09dcc-b0c7-5e31-9e09-1db3fb129067", 00:25:15.840 "is_configured": true, 00:25:15.840 "data_offset": 0, 00:25:15.840 "data_size": 65536 00:25:15.840 }, 00:25:15.840 { 00:25:15.840 "name": "BaseBdev3", 00:25:15.840 "uuid": "5b09a469-fa43-5391-8d12-710e2da52318", 00:25:15.840 "is_configured": true, 00:25:15.840 "data_offset": 0, 00:25:15.840 "data_size": 65536 00:25:15.840 }, 00:25:15.840 { 00:25:15.840 "name": "BaseBdev4", 00:25:15.840 "uuid": "13e669f2-541a-52a9-9694-387b6b120784", 00:25:15.840 "is_configured": true, 00:25:15.840 "data_offset": 0, 00:25:15.840 "data_size": 65536 00:25:15.840 } 00:25:15.840 ] 00:25:15.840 }' 00:25:15.840 06:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:15.840 06:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:15.840 06:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:16.098 06:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:16.098 06:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:16.356 [2024-08-14 06:56:43.389347] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:16.356 [2024-08-14 06:56:43.392883] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027e70 00:25:16.356 [2024-08-14 06:56:43.395435] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:16.356 06:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:25:17.294 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:17.294 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:17.294 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:17.294 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:17.294 06:56:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:17.294 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.294 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.552 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:17.552 "name": "raid_bdev1", 00:25:17.552 "uuid": "4243eec8-bcce-4325-bf59-732c21f6e39b", 00:25:17.552 "strip_size_kb": 64, 00:25:17.552 "state": "online", 00:25:17.553 "raid_level": "raid5f", 00:25:17.553 "superblock": false, 00:25:17.553 "num_base_bdevs": 4, 00:25:17.553 "num_base_bdevs_discovered": 4, 00:25:17.553 "num_base_bdevs_operational": 4, 00:25:17.553 "process": { 00:25:17.553 "type": "rebuild", 00:25:17.553 "target": "spare", 00:25:17.553 "progress": { 00:25:17.553 "blocks": 23040, 00:25:17.553 "percent": 11 00:25:17.553 } 00:25:17.553 }, 00:25:17.553 "base_bdevs_list": [ 00:25:17.553 { 00:25:17.553 "name": "spare", 00:25:17.553 "uuid": "3059643a-6493-5fa2-ae21-2b146cf8f977", 00:25:17.553 "is_configured": true, 00:25:17.553 "data_offset": 0, 00:25:17.553 "data_size": 65536 00:25:17.553 }, 00:25:17.553 { 00:25:17.553 "name": "BaseBdev2", 00:25:17.553 "uuid": "28e09dcc-b0c7-5e31-9e09-1db3fb129067", 00:25:17.553 "is_configured": true, 00:25:17.553 "data_offset": 0, 00:25:17.553 "data_size": 65536 00:25:17.553 }, 00:25:17.553 { 00:25:17.553 "name": "BaseBdev3", 00:25:17.553 "uuid": "5b09a469-fa43-5391-8d12-710e2da52318", 00:25:17.553 "is_configured": true, 00:25:17.553 "data_offset": 0, 00:25:17.553 "data_size": 65536 00:25:17.553 }, 00:25:17.553 { 00:25:17.553 "name": "BaseBdev4", 00:25:17.553 "uuid": "13e669f2-541a-52a9-9694-387b6b120784", 00:25:17.553 "is_configured": true, 00:25:17.553 "data_offset": 0, 00:25:17.553 "data_size": 65536 00:25:17.553 } 00:25:17.553 ] 00:25:17.553 }' 00:25:17.553 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:17.553 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:17.553 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:17.553 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:17.553 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:25:17.553 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:25:17.553 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:25:17.553 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=1163 00:25:17.553 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:17.553 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:17.553 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:17.553 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:17.553 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:17.553 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 
00:25:17.553 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.553 06:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.811 06:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:17.811 "name": "raid_bdev1", 00:25:17.811 "uuid": "4243eec8-bcce-4325-bf59-732c21f6e39b", 00:25:17.811 "strip_size_kb": 64, 00:25:17.811 "state": "online", 00:25:17.811 "raid_level": "raid5f", 00:25:17.811 "superblock": false, 00:25:17.811 "num_base_bdevs": 4, 00:25:17.811 "num_base_bdevs_discovered": 4, 00:25:17.811 "num_base_bdevs_operational": 4, 00:25:17.811 "process": { 00:25:17.811 "type": "rebuild", 00:25:17.811 "target": "spare", 00:25:17.811 "progress": { 00:25:17.811 "blocks": 30720, 00:25:17.811 "percent": 15 00:25:17.811 } 00:25:17.811 }, 00:25:17.811 "base_bdevs_list": [ 00:25:17.811 { 00:25:17.811 "name": "spare", 00:25:17.811 "uuid": "3059643a-6493-5fa2-ae21-2b146cf8f977", 00:25:17.811 "is_configured": true, 00:25:17.811 "data_offset": 0, 00:25:17.811 "data_size": 65536 00:25:17.811 }, 00:25:17.811 { 00:25:17.811 "name": "BaseBdev2", 00:25:17.811 "uuid": "28e09dcc-b0c7-5e31-9e09-1db3fb129067", 00:25:17.811 "is_configured": true, 00:25:17.811 "data_offset": 0, 00:25:17.811 "data_size": 65536 00:25:17.811 }, 00:25:17.811 { 00:25:17.811 "name": "BaseBdev3", 00:25:17.811 "uuid": "5b09a469-fa43-5391-8d12-710e2da52318", 00:25:17.811 "is_configured": true, 00:25:17.811 "data_offset": 0, 00:25:17.811 "data_size": 65536 00:25:17.811 }, 00:25:17.811 { 00:25:17.811 "name": "BaseBdev4", 00:25:17.811 "uuid": "13e669f2-541a-52a9-9694-387b6b120784", 00:25:17.811 "is_configured": true, 00:25:17.811 "data_offset": 0, 00:25:17.811 "data_size": 65536 00:25:17.811 } 00:25:17.811 ] 00:25:17.811 }' 00:25:17.811 06:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:18.133 06:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:18.133 06:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:18.133 06:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:18.133 06:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:19.071 06:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:19.071 06:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:19.071 06:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:19.071 06:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:19.071 06:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:19.071 06:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:19.071 06:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.071 06:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.330 06:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:19.330 
"name": "raid_bdev1", 00:25:19.330 "uuid": "4243eec8-bcce-4325-bf59-732c21f6e39b", 00:25:19.330 "strip_size_kb": 64, 00:25:19.330 "state": "online", 00:25:19.330 "raid_level": "raid5f", 00:25:19.330 "superblock": false, 00:25:19.330 "num_base_bdevs": 4, 00:25:19.330 "num_base_bdevs_discovered": 4, 00:25:19.330 "num_base_bdevs_operational": 4, 00:25:19.330 "process": { 00:25:19.330 "type": "rebuild", 00:25:19.330 "target": "spare", 00:25:19.330 "progress": { 00:25:19.330 "blocks": 55680, 00:25:19.330 "percent": 28 00:25:19.330 } 00:25:19.330 }, 00:25:19.330 "base_bdevs_list": [ 00:25:19.330 { 00:25:19.330 "name": "spare", 00:25:19.330 "uuid": "3059643a-6493-5fa2-ae21-2b146cf8f977", 00:25:19.330 "is_configured": true, 00:25:19.330 "data_offset": 0, 00:25:19.330 "data_size": 65536 00:25:19.330 }, 00:25:19.330 { 00:25:19.330 "name": "BaseBdev2", 00:25:19.330 "uuid": "28e09dcc-b0c7-5e31-9e09-1db3fb129067", 00:25:19.330 "is_configured": true, 00:25:19.330 "data_offset": 0, 00:25:19.330 "data_size": 65536 00:25:19.330 }, 00:25:19.330 { 00:25:19.330 "name": "BaseBdev3", 00:25:19.330 "uuid": "5b09a469-fa43-5391-8d12-710e2da52318", 00:25:19.330 "is_configured": true, 00:25:19.330 "data_offset": 0, 00:25:19.330 "data_size": 65536 00:25:19.330 }, 00:25:19.330 { 00:25:19.330 "name": "BaseBdev4", 00:25:19.330 "uuid": "13e669f2-541a-52a9-9694-387b6b120784", 00:25:19.330 "is_configured": true, 00:25:19.330 "data_offset": 0, 00:25:19.330 "data_size": 65536 00:25:19.331 } 00:25:19.331 ] 00:25:19.331 }' 00:25:19.331 06:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:19.331 06:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:19.331 06:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:19.331 06:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:19.331 06:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:20.268 06:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:20.268 06:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:20.268 06:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:20.268 06:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:20.268 06:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:20.268 06:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:20.268 06:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.268 06:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.527 06:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:20.527 "name": "raid_bdev1", 00:25:20.527 "uuid": "4243eec8-bcce-4325-bf59-732c21f6e39b", 00:25:20.527 "strip_size_kb": 64, 00:25:20.527 "state": "online", 00:25:20.527 "raid_level": "raid5f", 00:25:20.527 "superblock": false, 00:25:20.527 "num_base_bdevs": 4, 00:25:20.527 "num_base_bdevs_discovered": 4, 00:25:20.527 "num_base_bdevs_operational": 4, 00:25:20.527 "process": { 00:25:20.527 "type": "rebuild", 00:25:20.527 
"target": "spare", 00:25:20.527 "progress": { 00:25:20.527 "blocks": 80640, 00:25:20.527 "percent": 41 00:25:20.527 } 00:25:20.527 }, 00:25:20.527 "base_bdevs_list": [ 00:25:20.527 { 00:25:20.527 "name": "spare", 00:25:20.527 "uuid": "3059643a-6493-5fa2-ae21-2b146cf8f977", 00:25:20.527 "is_configured": true, 00:25:20.527 "data_offset": 0, 00:25:20.527 "data_size": 65536 00:25:20.527 }, 00:25:20.527 { 00:25:20.527 "name": "BaseBdev2", 00:25:20.527 "uuid": "28e09dcc-b0c7-5e31-9e09-1db3fb129067", 00:25:20.527 "is_configured": true, 00:25:20.527 "data_offset": 0, 00:25:20.527 "data_size": 65536 00:25:20.527 }, 00:25:20.527 { 00:25:20.527 "name": "BaseBdev3", 00:25:20.527 "uuid": "5b09a469-fa43-5391-8d12-710e2da52318", 00:25:20.527 "is_configured": true, 00:25:20.527 "data_offset": 0, 00:25:20.527 "data_size": 65536 00:25:20.527 }, 00:25:20.527 { 00:25:20.527 "name": "BaseBdev4", 00:25:20.527 "uuid": "13e669f2-541a-52a9-9694-387b6b120784", 00:25:20.527 "is_configured": true, 00:25:20.527 "data_offset": 0, 00:25:20.527 "data_size": 65536 00:25:20.527 } 00:25:20.527 ] 00:25:20.527 }' 00:25:20.527 06:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:20.786 06:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:20.786 06:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:20.786 06:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:20.786 06:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:21.724 06:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:21.724 06:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:21.724 06:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:21.724 06:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:21.724 06:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:21.724 06:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:21.724 06:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.724 06:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.983 06:56:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:21.983 "name": "raid_bdev1", 00:25:21.983 "uuid": "4243eec8-bcce-4325-bf59-732c21f6e39b", 00:25:21.983 "strip_size_kb": 64, 00:25:21.983 "state": "online", 00:25:21.983 "raid_level": "raid5f", 00:25:21.983 "superblock": false, 00:25:21.983 "num_base_bdevs": 4, 00:25:21.983 "num_base_bdevs_discovered": 4, 00:25:21.983 "num_base_bdevs_operational": 4, 00:25:21.983 "process": { 00:25:21.983 "type": "rebuild", 00:25:21.983 "target": "spare", 00:25:21.983 "progress": { 00:25:21.983 "blocks": 107520, 00:25:21.983 "percent": 54 00:25:21.983 } 00:25:21.983 }, 00:25:21.983 "base_bdevs_list": [ 00:25:21.983 { 00:25:21.983 "name": "spare", 00:25:21.983 "uuid": "3059643a-6493-5fa2-ae21-2b146cf8f977", 00:25:21.983 "is_configured": true, 00:25:21.983 "data_offset": 0, 00:25:21.983 "data_size": 65536 00:25:21.983 }, 00:25:21.983 { 00:25:21.983 
"name": "BaseBdev2", 00:25:21.983 "uuid": "28e09dcc-b0c7-5e31-9e09-1db3fb129067", 00:25:21.983 "is_configured": true, 00:25:21.983 "data_offset": 0, 00:25:21.983 "data_size": 65536 00:25:21.983 }, 00:25:21.983 { 00:25:21.983 "name": "BaseBdev3", 00:25:21.983 "uuid": "5b09a469-fa43-5391-8d12-710e2da52318", 00:25:21.983 "is_configured": true, 00:25:21.983 "data_offset": 0, 00:25:21.983 "data_size": 65536 00:25:21.983 }, 00:25:21.983 { 00:25:21.983 "name": "BaseBdev4", 00:25:21.984 "uuid": "13e669f2-541a-52a9-9694-387b6b120784", 00:25:21.984 "is_configured": true, 00:25:21.984 "data_offset": 0, 00:25:21.984 "data_size": 65536 00:25:21.984 } 00:25:21.984 ] 00:25:21.984 }' 00:25:21.984 06:56:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:21.984 06:56:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:21.984 06:56:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:21.984 06:56:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:21.984 06:56:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:23.364 06:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:23.364 06:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:23.364 06:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:23.364 06:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:23.364 06:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:23.364 06:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:23.364 06:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.364 06:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.364 06:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:23.364 "name": "raid_bdev1", 00:25:23.364 "uuid": "4243eec8-bcce-4325-bf59-732c21f6e39b", 00:25:23.364 "strip_size_kb": 64, 00:25:23.364 "state": "online", 00:25:23.364 "raid_level": "raid5f", 00:25:23.364 "superblock": false, 00:25:23.364 "num_base_bdevs": 4, 00:25:23.364 "num_base_bdevs_discovered": 4, 00:25:23.364 "num_base_bdevs_operational": 4, 00:25:23.364 "process": { 00:25:23.364 "type": "rebuild", 00:25:23.364 "target": "spare", 00:25:23.364 "progress": { 00:25:23.364 "blocks": 132480, 00:25:23.364 "percent": 67 00:25:23.364 } 00:25:23.364 }, 00:25:23.364 "base_bdevs_list": [ 00:25:23.364 { 00:25:23.364 "name": "spare", 00:25:23.364 "uuid": "3059643a-6493-5fa2-ae21-2b146cf8f977", 00:25:23.364 "is_configured": true, 00:25:23.364 "data_offset": 0, 00:25:23.364 "data_size": 65536 00:25:23.364 }, 00:25:23.364 { 00:25:23.364 "name": "BaseBdev2", 00:25:23.364 "uuid": "28e09dcc-b0c7-5e31-9e09-1db3fb129067", 00:25:23.364 "is_configured": true, 00:25:23.364 "data_offset": 0, 00:25:23.364 "data_size": 65536 00:25:23.364 }, 00:25:23.364 { 00:25:23.364 "name": "BaseBdev3", 00:25:23.364 "uuid": "5b09a469-fa43-5391-8d12-710e2da52318", 00:25:23.364 "is_configured": true, 00:25:23.364 "data_offset": 0, 00:25:23.364 "data_size": 65536 00:25:23.364 
}, 00:25:23.364 { 00:25:23.364 "name": "BaseBdev4", 00:25:23.364 "uuid": "13e669f2-541a-52a9-9694-387b6b120784", 00:25:23.364 "is_configured": true, 00:25:23.364 "data_offset": 0, 00:25:23.364 "data_size": 65536 00:25:23.364 } 00:25:23.364 ] 00:25:23.364 }' 00:25:23.364 06:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:23.364 06:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:23.364 06:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:23.364 06:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:23.364 06:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:24.302 06:56:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:24.302 06:56:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:24.302 06:56:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:24.302 06:56:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:24.302 06:56:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:24.302 06:56:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:24.302 06:56:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.302 06:56:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.562 06:56:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:24.562 "name": "raid_bdev1", 00:25:24.562 "uuid": "4243eec8-bcce-4325-bf59-732c21f6e39b", 00:25:24.562 "strip_size_kb": 64, 00:25:24.562 "state": "online", 00:25:24.562 "raid_level": "raid5f", 00:25:24.562 "superblock": false, 00:25:24.562 "num_base_bdevs": 4, 00:25:24.562 "num_base_bdevs_discovered": 4, 00:25:24.562 "num_base_bdevs_operational": 4, 00:25:24.562 "process": { 00:25:24.562 "type": "rebuild", 00:25:24.562 "target": "spare", 00:25:24.562 "progress": { 00:25:24.562 "blocks": 157440, 00:25:24.562 "percent": 80 00:25:24.562 } 00:25:24.562 }, 00:25:24.562 "base_bdevs_list": [ 00:25:24.562 { 00:25:24.562 "name": "spare", 00:25:24.562 "uuid": "3059643a-6493-5fa2-ae21-2b146cf8f977", 00:25:24.562 "is_configured": true, 00:25:24.562 "data_offset": 0, 00:25:24.562 "data_size": 65536 00:25:24.562 }, 00:25:24.562 { 00:25:24.562 "name": "BaseBdev2", 00:25:24.562 "uuid": "28e09dcc-b0c7-5e31-9e09-1db3fb129067", 00:25:24.562 "is_configured": true, 00:25:24.562 "data_offset": 0, 00:25:24.562 "data_size": 65536 00:25:24.562 }, 00:25:24.562 { 00:25:24.562 "name": "BaseBdev3", 00:25:24.562 "uuid": "5b09a469-fa43-5391-8d12-710e2da52318", 00:25:24.562 "is_configured": true, 00:25:24.562 "data_offset": 0, 00:25:24.562 "data_size": 65536 00:25:24.562 }, 00:25:24.562 { 00:25:24.562 "name": "BaseBdev4", 00:25:24.562 "uuid": "13e669f2-541a-52a9-9694-387b6b120784", 00:25:24.562 "is_configured": true, 00:25:24.562 "data_offset": 0, 00:25:24.562 "data_size": 65536 00:25:24.562 } 00:25:24.562 ] 00:25:24.562 }' 00:25:24.562 06:56:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:24.562 06:56:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:24.562 06:56:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:24.562 06:56:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:24.562 06:56:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:25.941 06:56:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:25.941 06:56:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:25.941 06:56:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:25.941 06:56:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:25.941 06:56:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:25.941 06:56:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:25.941 06:56:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.941 06:56:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.941 06:56:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:25.941 "name": "raid_bdev1", 00:25:25.941 "uuid": "4243eec8-bcce-4325-bf59-732c21f6e39b", 00:25:25.941 "strip_size_kb": 64, 00:25:25.941 "state": "online", 00:25:25.941 "raid_level": "raid5f", 00:25:25.941 "superblock": false, 00:25:25.941 "num_base_bdevs": 4, 00:25:25.941 "num_base_bdevs_discovered": 4, 00:25:25.941 "num_base_bdevs_operational": 4, 00:25:25.941 "process": { 00:25:25.941 "type": "rebuild", 00:25:25.941 "target": "spare", 00:25:25.941 "progress": { 00:25:25.941 "blocks": 182400, 00:25:25.941 "percent": 92 00:25:25.941 } 00:25:25.941 }, 00:25:25.941 "base_bdevs_list": [ 00:25:25.941 { 00:25:25.941 "name": "spare", 00:25:25.941 "uuid": "3059643a-6493-5fa2-ae21-2b146cf8f977", 00:25:25.941 "is_configured": true, 00:25:25.941 "data_offset": 0, 00:25:25.941 "data_size": 65536 00:25:25.941 }, 00:25:25.941 { 00:25:25.941 "name": "BaseBdev2", 00:25:25.941 "uuid": "28e09dcc-b0c7-5e31-9e09-1db3fb129067", 00:25:25.941 "is_configured": true, 00:25:25.941 "data_offset": 0, 00:25:25.941 "data_size": 65536 00:25:25.941 }, 00:25:25.941 { 00:25:25.941 "name": "BaseBdev3", 00:25:25.941 "uuid": "5b09a469-fa43-5391-8d12-710e2da52318", 00:25:25.941 "is_configured": true, 00:25:25.941 "data_offset": 0, 00:25:25.941 "data_size": 65536 00:25:25.941 }, 00:25:25.941 { 00:25:25.941 "name": "BaseBdev4", 00:25:25.941 "uuid": "13e669f2-541a-52a9-9694-387b6b120784", 00:25:25.941 "is_configured": true, 00:25:25.941 "data_offset": 0, 00:25:25.941 "data_size": 65536 00:25:25.941 } 00:25:25.941 ] 00:25:25.941 }' 00:25:25.941 06:56:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:25.941 06:56:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:25.941 06:56:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:25.941 06:56:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:25.941 06:56:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:26.937 
[2024-08-14 06:56:53.772545] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:26.938 [2024-08-14 06:56:53.772708] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:26.938 [2024-08-14 06:56:53.772770] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:26.938 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:26.938 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:26.938 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:26.938 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:26.938 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:26.938 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:26.938 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.938 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.212 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:27.212 "name": "raid_bdev1", 00:25:27.212 "uuid": "4243eec8-bcce-4325-bf59-732c21f6e39b", 00:25:27.212 "strip_size_kb": 64, 00:25:27.212 "state": "online", 00:25:27.212 "raid_level": "raid5f", 00:25:27.212 "superblock": false, 00:25:27.212 "num_base_bdevs": 4, 00:25:27.212 "num_base_bdevs_discovered": 4, 00:25:27.212 "num_base_bdevs_operational": 4, 00:25:27.212 "base_bdevs_list": [ 00:25:27.212 { 00:25:27.212 "name": "spare", 00:25:27.212 "uuid": "3059643a-6493-5fa2-ae21-2b146cf8f977", 00:25:27.212 "is_configured": true, 00:25:27.212 "data_offset": 0, 00:25:27.212 "data_size": 65536 00:25:27.212 }, 00:25:27.212 { 00:25:27.212 "name": "BaseBdev2", 00:25:27.212 "uuid": "28e09dcc-b0c7-5e31-9e09-1db3fb129067", 00:25:27.212 "is_configured": true, 00:25:27.212 "data_offset": 0, 00:25:27.212 "data_size": 65536 00:25:27.212 }, 00:25:27.212 { 00:25:27.212 "name": "BaseBdev3", 00:25:27.212 "uuid": "5b09a469-fa43-5391-8d12-710e2da52318", 00:25:27.212 "is_configured": true, 00:25:27.212 "data_offset": 0, 00:25:27.212 "data_size": 65536 00:25:27.212 }, 00:25:27.212 { 00:25:27.212 "name": "BaseBdev4", 00:25:27.212 "uuid": "13e669f2-541a-52a9-9694-387b6b120784", 00:25:27.212 "is_configured": true, 00:25:27.212 "data_offset": 0, 00:25:27.212 "data_size": 65536 00:25:27.212 } 00:25:27.212 ] 00:25:27.212 }' 00:25:27.212 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:27.212 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:27.212 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:27.471 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:27.471 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:25:27.471 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:27.471 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:27.471 06:56:54 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:27.471 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:27.471 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:27.471 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.471 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:27.730 "name": "raid_bdev1", 00:25:27.730 "uuid": "4243eec8-bcce-4325-bf59-732c21f6e39b", 00:25:27.730 "strip_size_kb": 64, 00:25:27.730 "state": "online", 00:25:27.730 "raid_level": "raid5f", 00:25:27.730 "superblock": false, 00:25:27.730 "num_base_bdevs": 4, 00:25:27.730 "num_base_bdevs_discovered": 4, 00:25:27.730 "num_base_bdevs_operational": 4, 00:25:27.730 "base_bdevs_list": [ 00:25:27.730 { 00:25:27.730 "name": "spare", 00:25:27.730 "uuid": "3059643a-6493-5fa2-ae21-2b146cf8f977", 00:25:27.730 "is_configured": true, 00:25:27.730 "data_offset": 0, 00:25:27.730 "data_size": 65536 00:25:27.730 }, 00:25:27.730 { 00:25:27.730 "name": "BaseBdev2", 00:25:27.730 "uuid": "28e09dcc-b0c7-5e31-9e09-1db3fb129067", 00:25:27.730 "is_configured": true, 00:25:27.730 "data_offset": 0, 00:25:27.730 "data_size": 65536 00:25:27.730 }, 00:25:27.730 { 00:25:27.730 "name": "BaseBdev3", 00:25:27.730 "uuid": "5b09a469-fa43-5391-8d12-710e2da52318", 00:25:27.730 "is_configured": true, 00:25:27.730 "data_offset": 0, 00:25:27.730 "data_size": 65536 00:25:27.730 }, 00:25:27.730 { 00:25:27.730 "name": "BaseBdev4", 00:25:27.730 "uuid": "13e669f2-541a-52a9-9694-387b6b120784", 00:25:27.730 "is_configured": true, 00:25:27.730 "data_offset": 0, 00:25:27.730 "data_size": 65536 00:25:27.730 } 00:25:27.730 ] 00:25:27.730 }' 00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.730 06:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.989 06:56:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:27.989 "name": "raid_bdev1", 00:25:27.989 "uuid": "4243eec8-bcce-4325-bf59-732c21f6e39b", 00:25:27.989 "strip_size_kb": 64, 00:25:27.989 "state": "online", 00:25:27.989 "raid_level": "raid5f", 00:25:27.989 "superblock": false, 00:25:27.989 "num_base_bdevs": 4, 00:25:27.989 "num_base_bdevs_discovered": 4, 00:25:27.989 "num_base_bdevs_operational": 4, 00:25:27.989 "base_bdevs_list": [ 00:25:27.989 { 00:25:27.989 "name": "spare", 00:25:27.989 "uuid": "3059643a-6493-5fa2-ae21-2b146cf8f977", 00:25:27.989 "is_configured": true, 00:25:27.989 "data_offset": 0, 00:25:27.989 "data_size": 65536 00:25:27.989 }, 00:25:27.989 { 00:25:27.990 "name": "BaseBdev2", 00:25:27.990 "uuid": "28e09dcc-b0c7-5e31-9e09-1db3fb129067", 00:25:27.990 "is_configured": true, 00:25:27.990 "data_offset": 0, 00:25:27.990 "data_size": 65536 00:25:27.990 }, 00:25:27.990 { 00:25:27.990 "name": "BaseBdev3", 00:25:27.990 "uuid": "5b09a469-fa43-5391-8d12-710e2da52318", 00:25:27.990 "is_configured": true, 00:25:27.990 "data_offset": 0, 00:25:27.990 "data_size": 65536 00:25:27.990 }, 00:25:27.990 { 00:25:27.990 "name": "BaseBdev4", 00:25:27.990 "uuid": "13e669f2-541a-52a9-9694-387b6b120784", 00:25:27.990 "is_configured": true, 00:25:27.990 "data_offset": 0, 00:25:27.990 "data_size": 65536 00:25:27.990 } 00:25:27.990 ] 00:25:27.990 }' 00:25:27.990 06:56:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:27.990 06:56:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.925 06:56:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:28.925 [2024-08-14 06:56:56.034644] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:28.925 [2024-08-14 06:56:56.034695] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:28.925 [2024-08-14 06:56:56.034791] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:28.925 [2024-08-14 06:56:56.034916] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:28.925 [2024-08-14 06:56:56.034953] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:25:28.925 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.925 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:25:29.184 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:25:29.184 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:25:29.184 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:25:29.184 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:29.184 06:56:56 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:29.184 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:29.184 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:29.184 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:29.184 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:29.184 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:25:29.184 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:29.184 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:29.184 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:29.443 /dev/nbd0 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:29.443 1+0 records in 00:25:29.443 1+0 records out 00:25:29.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421838 s, 9.7 MB/s 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:29.443 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:29.702 /dev/nbd1 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:29.702 06:56:56 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:29.702 1+0 records in 00:25:29.702 1+0 records out 00:25:29.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026856 s, 15.3 MB/s 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:29.702 06:56:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:29.961 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:29.961 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:29.961 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:29.961 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:29.961 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:25:29.961 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:29.961 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:30.220 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:30.220 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:30.220 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:30.220 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:30.220 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:30.220 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:30.220 06:56:57 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:25:30.220 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:30.220 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:30.220 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 104726 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 104726 ']' 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 104726 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 104726 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:30.479 killing process with pid 104726 00:25:30.479 Received shutdown signal, test time was about 60.000000 seconds 00:25:30.479 00:25:30.479 Latency(us) 00:25:30.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.479 =================================================================================================================== 00:25:30.479 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 104726' 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@965 -- # kill 104726 00:25:30.479 06:56:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # wait 104726 00:25:30.479 [2024-08-14 06:56:57.558185] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:30.479 [2024-08-14 06:56:57.609595] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:25:30.737 00:25:30.737 real 0m25.410s 00:25:30.737 user 0m37.707s 00:25:30.737 sys 0m3.296s 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
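For readers following the trace, the run_test call below hands five positional arguments to raid_rebuild_test; the locals bound in the xtrace that follows give their meaning. A minimal annotated sketch of that invocation (argument names taken from those locals in the trace, not from separate documentation):

    # raid_rebuild_test <raid_level> <num_base_bdevs> <superblock> <background_io> <verify>
    run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true
    #   raid_level=raid5f, num_base_bdevs=4, superblock=true,
    #   background_io=false, verify=true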
00:25:30.737 ************************************ 00:25:30.737 END TEST raid5f_rebuild_test 00:25:30.737 ************************************ 00:25:30.737 06:56:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:25:30.737 06:56:57 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:25:30.737 06:56:57 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:30.737 06:56:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:30.737 ************************************ 00:25:30.737 START TEST raid5f_rebuild_test_sb 00:25:30.737 ************************************ 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid5f 4 true false true 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:30.737 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local 
data_offset 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=105291 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 105291 /var/tmp/spdk-raid.sock 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 105291 ']' 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:30.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.738 06:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:30.738 [2024-08-14 06:56:57.980290] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:25:30.738 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:30.738 Zero copy mechanism will not be used. 
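The bdevperf start-up above follows a launch-then-wait pattern: the test starts bdevperf on a private RPC socket with -z (hold the workload until told to start over RPC) and only issues bdev RPCs once the socket answers. A minimal sketch of that pattern, with the binary path, flags, and the waitforlisten helper exactly as they appear in this trace:

    # start bdevperf on its own RPC socket; -z holds the workload until an RPC starts it
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # block until the UNIX-domain socket accepts RPCs (helper from autotest_common.sh)
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock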
00:25:30.738 [2024-08-14 06:56:57.980439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105291 ] 00:25:30.995 [2024-08-14 06:56:58.129850] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.995 [2024-08-14 06:56:58.180134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.995 [2024-08-14 06:56:58.224319] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:30.995 [2024-08-14 06:56:58.224366] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:31.930 06:56:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:31.930 06:56:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:25:31.930 06:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:31.930 06:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:31.930 BaseBdev1_malloc 00:25:31.930 06:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:32.189 [2024-08-14 06:56:59.285410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:32.189 [2024-08-14 06:56:59.285500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:32.189 [2024-08-14 06:56:59.285526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:25:32.189 [2024-08-14 06:56:59.285539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:32.189 [2024-08-14 06:56:59.288106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:32.189 [2024-08-14 06:56:59.288165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:32.189 BaseBdev1 00:25:32.189 06:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:32.189 06:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:32.448 BaseBdev2_malloc 00:25:32.448 06:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:32.707 [2024-08-14 06:56:59.730060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:32.707 [2024-08-14 06:56:59.730151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:32.707 [2024-08-14 06:56:59.730192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:32.707 [2024-08-14 06:56:59.730205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:32.707 [2024-08-14 06:56:59.732495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:32.707 [2024-08-14 06:56:59.732537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:25:32.707 BaseBdev2 00:25:32.707 06:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:32.707 06:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:32.707 BaseBdev3_malloc 00:25:32.969 06:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:32.969 [2024-08-14 06:57:00.155283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:32.969 [2024-08-14 06:57:00.155367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:32.969 [2024-08-14 06:57:00.155393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:32.969 [2024-08-14 06:57:00.155404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:32.969 [2024-08-14 06:57:00.157752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:32.969 [2024-08-14 06:57:00.157793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:32.969 BaseBdev3 00:25:32.969 06:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:32.969 06:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:33.227 BaseBdev4_malloc 00:25:33.227 06:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:33.484 [2024-08-14 06:57:00.571562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:33.484 [2024-08-14 06:57:00.571659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:33.484 [2024-08-14 06:57:00.571688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:33.484 [2024-08-14 06:57:00.571703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:33.484 [2024-08-14 06:57:00.574039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:33.484 [2024-08-14 06:57:00.574084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:33.484 BaseBdev4 00:25:33.484 06:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:33.741 spare_malloc 00:25:33.741 06:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:33.999 spare_delay 00:25:34.000 06:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:34.000 [2024-08-14 06:57:01.243631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:34.000 [2024-08-14 06:57:01.243711] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:34.000 [2024-08-14 06:57:01.243740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:34.000 [2024-08-14 06:57:01.243752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:34.000 [2024-08-14 06:57:01.246150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:34.000 [2024-08-14 06:57:01.246209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:34.000 spare 00:25:34.258 06:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:34.258 [2024-08-14 06:57:01.471403] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:34.258 [2024-08-14 06:57:01.473374] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:34.258 [2024-08-14 06:57:01.473448] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:34.258 [2024-08-14 06:57:01.473495] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:34.258 [2024-08-14 06:57:01.473706] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:25:34.258 [2024-08-14 06:57:01.473726] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:34.258 [2024-08-14 06:57:01.474047] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:25:34.258 [2024-08-14 06:57:01.474581] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:25:34.258 [2024-08-14 06:57:01.474600] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:25:34.258 [2024-08-14 06:57:01.474785] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:34.258 06:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:34.258 06:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:34.258 06:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:34.258 06:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:34.258 06:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:34.258 06:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:34.258 06:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:34.258 06:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:34.258 06:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:34.258 06:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:34.258 06:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.258 06:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.516 06:57:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:34.516 "name": "raid_bdev1", 00:25:34.516 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:34.516 "strip_size_kb": 64, 00:25:34.516 "state": "online", 00:25:34.516 "raid_level": "raid5f", 00:25:34.516 "superblock": true, 00:25:34.516 "num_base_bdevs": 4, 00:25:34.516 "num_base_bdevs_discovered": 4, 00:25:34.516 "num_base_bdevs_operational": 4, 00:25:34.516 "base_bdevs_list": [ 00:25:34.516 { 00:25:34.516 "name": "BaseBdev1", 00:25:34.516 "uuid": "b9ef2787-a8b6-5f5b-859a-d169bb3e4cf0", 00:25:34.516 "is_configured": true, 00:25:34.516 "data_offset": 2048, 00:25:34.516 "data_size": 63488 00:25:34.516 }, 00:25:34.516 { 00:25:34.516 "name": "BaseBdev2", 00:25:34.516 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:34.516 "is_configured": true, 00:25:34.516 "data_offset": 2048, 00:25:34.516 "data_size": 63488 00:25:34.516 }, 00:25:34.516 { 00:25:34.516 "name": "BaseBdev3", 00:25:34.516 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:34.516 "is_configured": true, 00:25:34.516 "data_offset": 2048, 00:25:34.516 "data_size": 63488 00:25:34.516 }, 00:25:34.516 { 00:25:34.516 "name": "BaseBdev4", 00:25:34.516 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:34.516 "is_configured": true, 00:25:34.516 "data_offset": 2048, 00:25:34.516 "data_size": 63488 00:25:34.516 } 00:25:34.516 ] 00:25:34.516 }' 00:25:34.516 06:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:34.516 06:57:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:35.484 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:25:35.484 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:35.484 [2024-08-14 06:57:02.574992] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:35.484 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=190464 00:25:35.485 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.485 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:35.774 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:25:35.774 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:25:35.774 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:25:35.774 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:25:35.774 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:35.774 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:35.774 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:35.774 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:35.774 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:35.774 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 
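The raid_bdev_size=190464 and data_offset=2048 bindings above come from two queries against the running target; a standalone sketch of the same queries (rpc.py here is scripts/rpc.py from the repo, as used throughout this trace):

    # total usable size of the array, in 512-byte blocks
    raid_bdev_size=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 \
                     | jq -r '.[].num_blocks')                      # 190464 in this run
    # offset on each base bdev where array data starts (non-zero here because -s/superblock is set)
    data_offset=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
                  | jq -r '.[].base_bdevs_list[0].data_offset')     # 2048 in this run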
00:25:35.774 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:25:35.774 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:35.774 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:35.774 06:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:36.034 [2024-08-14 06:57:03.050084] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:25:36.034 /dev/nbd0 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:36.034 1+0 records in 00:25:36.034 1+0 records out 00:25:36.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265501 s, 15.4 MB/s 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # write_unit_size=384 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # echo 192 00:25:36.034 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:25:36.604 496+0 records in 00:25:36.604 496+0 records out 00:25:36.604 97517568 bytes (98 MB, 93 MiB) copied, 0.483363 s, 202 MB/s 00:25:36.604 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 
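The dd parameters in the fill step above follow from the array geometry reported earlier in the trace; a short worked check (512 B blocks as printed by raid_bdev_configure_cont, strip_size_kb 64):

    # 64 KiB strip / 512 B blocks        -> 128 blocks per strip
    # raid5f over 4 base bdevs           -> 3 data strips + 1 parity strip per stripe
    # write_unit_size = 128 * (4 - 1)    =  384 blocks = 196608 bytes
    # raid_bdev_size  = 190464 blocks    -> 190464 / 384 = 496 full stripes
    # hence: dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct
    #        496 * 196608 = 97517568 bytes, matching the dd summary above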
00:25:36.604 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:36.604 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:36.604 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:36.604 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:36.604 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:36.604 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:36.604 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:36.604 [2024-08-14 06:57:03.831459] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:36.604 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:36.604 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:36.604 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:36.604 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:36.604 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:36.604 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:36.604 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:36.604 06:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:36.864 [2024-08-14 06:57:04.043249] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:36.864 06:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:36.864 06:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:36.864 06:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:36.864 06:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:36.864 06:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:36.864 06:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:36.864 06:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:36.864 06:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:36.864 06:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:36.864 06:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:36.864 06:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.864 06:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.124 06:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:37.124 "name": "raid_bdev1", 00:25:37.124 "uuid": 
"bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:37.124 "strip_size_kb": 64, 00:25:37.124 "state": "online", 00:25:37.124 "raid_level": "raid5f", 00:25:37.124 "superblock": true, 00:25:37.124 "num_base_bdevs": 4, 00:25:37.124 "num_base_bdevs_discovered": 3, 00:25:37.124 "num_base_bdevs_operational": 3, 00:25:37.124 "base_bdevs_list": [ 00:25:37.124 { 00:25:37.124 "name": null, 00:25:37.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.124 "is_configured": false, 00:25:37.124 "data_offset": 2048, 00:25:37.124 "data_size": 63488 00:25:37.124 }, 00:25:37.124 { 00:25:37.124 "name": "BaseBdev2", 00:25:37.124 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:37.124 "is_configured": true, 00:25:37.124 "data_offset": 2048, 00:25:37.124 "data_size": 63488 00:25:37.124 }, 00:25:37.124 { 00:25:37.124 "name": "BaseBdev3", 00:25:37.124 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:37.124 "is_configured": true, 00:25:37.124 "data_offset": 2048, 00:25:37.124 "data_size": 63488 00:25:37.124 }, 00:25:37.124 { 00:25:37.124 "name": "BaseBdev4", 00:25:37.124 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:37.124 "is_configured": true, 00:25:37.124 "data_offset": 2048, 00:25:37.124 "data_size": 63488 00:25:37.124 } 00:25:37.124 ] 00:25:37.124 }' 00:25:37.124 06:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:37.124 06:57:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:37.693 06:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:37.951 [2024-08-14 06:57:05.089567] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:37.951 [2024-08-14 06:57:05.093190] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:25:37.951 [2024-08-14 06:57:05.095553] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:37.951 06:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:38.887 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:38.887 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:38.887 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:38.887 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:38.887 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:38.887 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.887 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.145 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:39.145 "name": "raid_bdev1", 00:25:39.145 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:39.145 "strip_size_kb": 64, 00:25:39.145 "state": "online", 00:25:39.145 "raid_level": "raid5f", 00:25:39.145 "superblock": true, 00:25:39.145 "num_base_bdevs": 4, 00:25:39.145 "num_base_bdevs_discovered": 4, 00:25:39.146 "num_base_bdevs_operational": 4, 00:25:39.146 "process": { 00:25:39.146 "type": "rebuild", 
00:25:39.146 "target": "spare", 00:25:39.146 "progress": { 00:25:39.146 "blocks": 23040, 00:25:39.146 "percent": 12 00:25:39.146 } 00:25:39.146 }, 00:25:39.146 "base_bdevs_list": [ 00:25:39.146 { 00:25:39.146 "name": "spare", 00:25:39.146 "uuid": "82038e30-74bf-5435-91a4-c2de32bb598c", 00:25:39.146 "is_configured": true, 00:25:39.146 "data_offset": 2048, 00:25:39.146 "data_size": 63488 00:25:39.146 }, 00:25:39.146 { 00:25:39.146 "name": "BaseBdev2", 00:25:39.146 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:39.146 "is_configured": true, 00:25:39.146 "data_offset": 2048, 00:25:39.146 "data_size": 63488 00:25:39.146 }, 00:25:39.146 { 00:25:39.146 "name": "BaseBdev3", 00:25:39.146 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:39.146 "is_configured": true, 00:25:39.146 "data_offset": 2048, 00:25:39.146 "data_size": 63488 00:25:39.146 }, 00:25:39.146 { 00:25:39.146 "name": "BaseBdev4", 00:25:39.146 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:39.146 "is_configured": true, 00:25:39.146 "data_offset": 2048, 00:25:39.146 "data_size": 63488 00:25:39.146 } 00:25:39.146 ] 00:25:39.146 }' 00:25:39.146 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:39.405 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:39.405 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:39.405 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:39.405 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:39.663 [2024-08-14 06:57:06.679209] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:39.663 [2024-08-14 06:57:06.708221] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:39.663 [2024-08-14 06:57:06.708309] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:39.663 [2024-08-14 06:57:06.708331] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:39.663 [2024-08-14 06:57:06.708343] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:39.663 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:39.663 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:39.663 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:39.663 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:39.663 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:39.663 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:39.663 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:39.663 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:39.663 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:39.663 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:39.663 06:57:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.663 06:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.921 06:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:39.921 "name": "raid_bdev1", 00:25:39.921 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:39.921 "strip_size_kb": 64, 00:25:39.921 "state": "online", 00:25:39.921 "raid_level": "raid5f", 00:25:39.921 "superblock": true, 00:25:39.921 "num_base_bdevs": 4, 00:25:39.921 "num_base_bdevs_discovered": 3, 00:25:39.921 "num_base_bdevs_operational": 3, 00:25:39.921 "base_bdevs_list": [ 00:25:39.921 { 00:25:39.921 "name": null, 00:25:39.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.921 "is_configured": false, 00:25:39.921 "data_offset": 2048, 00:25:39.921 "data_size": 63488 00:25:39.921 }, 00:25:39.921 { 00:25:39.921 "name": "BaseBdev2", 00:25:39.921 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:39.921 "is_configured": true, 00:25:39.921 "data_offset": 2048, 00:25:39.921 "data_size": 63488 00:25:39.921 }, 00:25:39.921 { 00:25:39.921 "name": "BaseBdev3", 00:25:39.921 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:39.921 "is_configured": true, 00:25:39.921 "data_offset": 2048, 00:25:39.921 "data_size": 63488 00:25:39.921 }, 00:25:39.921 { 00:25:39.921 "name": "BaseBdev4", 00:25:39.921 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:39.921 "is_configured": true, 00:25:39.921 "data_offset": 2048, 00:25:39.921 "data_size": 63488 00:25:39.921 } 00:25:39.921 ] 00:25:39.921 }' 00:25:39.921 06:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:39.921 06:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.486 06:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:40.486 06:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:40.486 06:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:40.486 06:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:40.486 06:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:40.486 06:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.486 06:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.744 06:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:40.744 "name": "raid_bdev1", 00:25:40.744 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:40.744 "strip_size_kb": 64, 00:25:40.744 "state": "online", 00:25:40.744 "raid_level": "raid5f", 00:25:40.744 "superblock": true, 00:25:40.744 "num_base_bdevs": 4, 00:25:40.744 "num_base_bdevs_discovered": 3, 00:25:40.744 "num_base_bdevs_operational": 3, 00:25:40.744 "base_bdevs_list": [ 00:25:40.744 { 00:25:40.744 "name": null, 00:25:40.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.744 "is_configured": false, 00:25:40.744 "data_offset": 2048, 00:25:40.744 "data_size": 63488 00:25:40.744 }, 00:25:40.744 { 00:25:40.744 "name": 
"BaseBdev2", 00:25:40.744 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:40.744 "is_configured": true, 00:25:40.744 "data_offset": 2048, 00:25:40.744 "data_size": 63488 00:25:40.744 }, 00:25:40.744 { 00:25:40.744 "name": "BaseBdev3", 00:25:40.744 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:40.744 "is_configured": true, 00:25:40.744 "data_offset": 2048, 00:25:40.744 "data_size": 63488 00:25:40.744 }, 00:25:40.744 { 00:25:40.744 "name": "BaseBdev4", 00:25:40.744 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:40.744 "is_configured": true, 00:25:40.744 "data_offset": 2048, 00:25:40.744 "data_size": 63488 00:25:40.744 } 00:25:40.744 ] 00:25:40.744 }' 00:25:40.744 06:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:41.002 06:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:41.002 06:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:41.002 06:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:41.002 06:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:41.260 [2024-08-14 06:57:08.272170] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:41.260 [2024-08-14 06:57:08.275738] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027170 00:25:41.260 [2024-08-14 06:57:08.278250] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:41.260 06:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:25:42.195 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:42.195 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:42.195 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:42.195 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:42.195 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:42.195 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.195 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:42.453 "name": "raid_bdev1", 00:25:42.453 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:42.453 "strip_size_kb": 64, 00:25:42.453 "state": "online", 00:25:42.453 "raid_level": "raid5f", 00:25:42.453 "superblock": true, 00:25:42.453 "num_base_bdevs": 4, 00:25:42.453 "num_base_bdevs_discovered": 4, 00:25:42.453 "num_base_bdevs_operational": 4, 00:25:42.453 "process": { 00:25:42.453 "type": "rebuild", 00:25:42.453 "target": "spare", 00:25:42.453 "progress": { 00:25:42.453 "blocks": 23040, 00:25:42.453 "percent": 12 00:25:42.453 } 00:25:42.453 }, 00:25:42.453 "base_bdevs_list": [ 00:25:42.453 { 00:25:42.453 "name": "spare", 00:25:42.453 "uuid": "82038e30-74bf-5435-91a4-c2de32bb598c", 00:25:42.453 "is_configured": true, 00:25:42.453 
"data_offset": 2048, 00:25:42.453 "data_size": 63488 00:25:42.453 }, 00:25:42.453 { 00:25:42.453 "name": "BaseBdev2", 00:25:42.453 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:42.453 "is_configured": true, 00:25:42.453 "data_offset": 2048, 00:25:42.453 "data_size": 63488 00:25:42.453 }, 00:25:42.453 { 00:25:42.453 "name": "BaseBdev3", 00:25:42.453 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:42.453 "is_configured": true, 00:25:42.453 "data_offset": 2048, 00:25:42.453 "data_size": 63488 00:25:42.453 }, 00:25:42.453 { 00:25:42.453 "name": "BaseBdev4", 00:25:42.453 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:42.453 "is_configured": true, 00:25:42.453 "data_offset": 2048, 00:25:42.453 "data_size": 63488 00:25:42.453 } 00:25:42.453 ] 00:25:42.453 }' 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:25:42.453 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=1188 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.453 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.712 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:42.712 "name": "raid_bdev1", 00:25:42.712 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:42.712 "strip_size_kb": 64, 00:25:42.712 "state": "online", 00:25:42.712 "raid_level": "raid5f", 00:25:42.712 "superblock": true, 00:25:42.712 "num_base_bdevs": 4, 00:25:42.712 "num_base_bdevs_discovered": 4, 00:25:42.712 "num_base_bdevs_operational": 4, 00:25:42.712 "process": { 00:25:42.712 "type": "rebuild", 00:25:42.712 "target": "spare", 00:25:42.712 "progress": { 00:25:42.712 "blocks": 28800, 00:25:42.712 "percent": 15 00:25:42.712 } 00:25:42.712 }, 
00:25:42.712 "base_bdevs_list": [ 00:25:42.712 { 00:25:42.712 "name": "spare", 00:25:42.712 "uuid": "82038e30-74bf-5435-91a4-c2de32bb598c", 00:25:42.712 "is_configured": true, 00:25:42.712 "data_offset": 2048, 00:25:42.712 "data_size": 63488 00:25:42.712 }, 00:25:42.712 { 00:25:42.712 "name": "BaseBdev2", 00:25:42.712 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:42.712 "is_configured": true, 00:25:42.712 "data_offset": 2048, 00:25:42.712 "data_size": 63488 00:25:42.712 }, 00:25:42.712 { 00:25:42.712 "name": "BaseBdev3", 00:25:42.712 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:42.712 "is_configured": true, 00:25:42.712 "data_offset": 2048, 00:25:42.712 "data_size": 63488 00:25:42.712 }, 00:25:42.712 { 00:25:42.712 "name": "BaseBdev4", 00:25:42.712 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:42.712 "is_configured": true, 00:25:42.712 "data_offset": 2048, 00:25:42.712 "data_size": 63488 00:25:42.712 } 00:25:42.712 ] 00:25:42.712 }' 00:25:42.712 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:42.712 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:42.712 06:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:42.970 06:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:42.970 06:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:43.905 06:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:43.905 06:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:43.905 06:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:43.905 06:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:43.905 06:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:43.905 06:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:43.905 06:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.905 06:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.164 06:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:44.164 "name": "raid_bdev1", 00:25:44.164 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:44.164 "strip_size_kb": 64, 00:25:44.164 "state": "online", 00:25:44.164 "raid_level": "raid5f", 00:25:44.164 "superblock": true, 00:25:44.164 "num_base_bdevs": 4, 00:25:44.164 "num_base_bdevs_discovered": 4, 00:25:44.164 "num_base_bdevs_operational": 4, 00:25:44.164 "process": { 00:25:44.164 "type": "rebuild", 00:25:44.164 "target": "spare", 00:25:44.164 "progress": { 00:25:44.164 "blocks": 55680, 00:25:44.164 "percent": 29 00:25:44.164 } 00:25:44.164 }, 00:25:44.164 "base_bdevs_list": [ 00:25:44.164 { 00:25:44.164 "name": "spare", 00:25:44.164 "uuid": "82038e30-74bf-5435-91a4-c2de32bb598c", 00:25:44.164 "is_configured": true, 00:25:44.164 "data_offset": 2048, 00:25:44.164 "data_size": 63488 00:25:44.164 }, 00:25:44.164 { 00:25:44.164 "name": "BaseBdev2", 00:25:44.164 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 
00:25:44.164 "is_configured": true, 00:25:44.164 "data_offset": 2048, 00:25:44.164 "data_size": 63488 00:25:44.164 }, 00:25:44.164 { 00:25:44.164 "name": "BaseBdev3", 00:25:44.164 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:44.164 "is_configured": true, 00:25:44.164 "data_offset": 2048, 00:25:44.164 "data_size": 63488 00:25:44.164 }, 00:25:44.164 { 00:25:44.164 "name": "BaseBdev4", 00:25:44.164 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:44.164 "is_configured": true, 00:25:44.164 "data_offset": 2048, 00:25:44.164 "data_size": 63488 00:25:44.164 } 00:25:44.164 ] 00:25:44.164 }' 00:25:44.164 06:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:44.164 06:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:44.164 06:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:44.164 06:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:44.164 06:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:45.542 06:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:45.542 06:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:45.542 06:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:45.542 06:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:45.542 06:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:45.542 06:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:45.542 06:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.542 06:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.542 06:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:45.542 "name": "raid_bdev1", 00:25:45.542 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:45.542 "strip_size_kb": 64, 00:25:45.542 "state": "online", 00:25:45.542 "raid_level": "raid5f", 00:25:45.542 "superblock": true, 00:25:45.542 "num_base_bdevs": 4, 00:25:45.542 "num_base_bdevs_discovered": 4, 00:25:45.542 "num_base_bdevs_operational": 4, 00:25:45.542 "process": { 00:25:45.542 "type": "rebuild", 00:25:45.542 "target": "spare", 00:25:45.542 "progress": { 00:25:45.542 "blocks": 80640, 00:25:45.542 "percent": 42 00:25:45.542 } 00:25:45.542 }, 00:25:45.542 "base_bdevs_list": [ 00:25:45.542 { 00:25:45.542 "name": "spare", 00:25:45.542 "uuid": "82038e30-74bf-5435-91a4-c2de32bb598c", 00:25:45.542 "is_configured": true, 00:25:45.542 "data_offset": 2048, 00:25:45.542 "data_size": 63488 00:25:45.542 }, 00:25:45.542 { 00:25:45.542 "name": "BaseBdev2", 00:25:45.542 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:45.542 "is_configured": true, 00:25:45.542 "data_offset": 2048, 00:25:45.542 "data_size": 63488 00:25:45.542 }, 00:25:45.542 { 00:25:45.542 "name": "BaseBdev3", 00:25:45.542 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:45.542 "is_configured": true, 00:25:45.542 "data_offset": 2048, 00:25:45.542 "data_size": 63488 00:25:45.542 }, 00:25:45.542 { 
00:25:45.542 "name": "BaseBdev4", 00:25:45.542 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:45.542 "is_configured": true, 00:25:45.542 "data_offset": 2048, 00:25:45.542 "data_size": 63488 00:25:45.543 } 00:25:45.543 ] 00:25:45.543 }' 00:25:45.543 06:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:45.543 06:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:45.543 06:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:45.543 06:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:45.543 06:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:46.480 06:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:46.480 06:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:46.480 06:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:46.480 06:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:46.480 06:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:46.480 06:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:46.480 06:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.480 06:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.736 06:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:46.736 "name": "raid_bdev1", 00:25:46.736 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:46.736 "strip_size_kb": 64, 00:25:46.736 "state": "online", 00:25:46.736 "raid_level": "raid5f", 00:25:46.736 "superblock": true, 00:25:46.736 "num_base_bdevs": 4, 00:25:46.736 "num_base_bdevs_discovered": 4, 00:25:46.736 "num_base_bdevs_operational": 4, 00:25:46.736 "process": { 00:25:46.736 "type": "rebuild", 00:25:46.736 "target": "spare", 00:25:46.736 "progress": { 00:25:46.736 "blocks": 107520, 00:25:46.736 "percent": 56 00:25:46.736 } 00:25:46.736 }, 00:25:46.736 "base_bdevs_list": [ 00:25:46.736 { 00:25:46.736 "name": "spare", 00:25:46.736 "uuid": "82038e30-74bf-5435-91a4-c2de32bb598c", 00:25:46.736 "is_configured": true, 00:25:46.736 "data_offset": 2048, 00:25:46.736 "data_size": 63488 00:25:46.736 }, 00:25:46.736 { 00:25:46.736 "name": "BaseBdev2", 00:25:46.736 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:46.736 "is_configured": true, 00:25:46.736 "data_offset": 2048, 00:25:46.736 "data_size": 63488 00:25:46.736 }, 00:25:46.736 { 00:25:46.736 "name": "BaseBdev3", 00:25:46.736 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:46.736 "is_configured": true, 00:25:46.736 "data_offset": 2048, 00:25:46.736 "data_size": 63488 00:25:46.736 }, 00:25:46.736 { 00:25:46.736 "name": "BaseBdev4", 00:25:46.736 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:46.736 "is_configured": true, 00:25:46.736 "data_offset": 2048, 00:25:46.736 "data_size": 63488 00:25:46.736 } 00:25:46.736 ] 00:25:46.736 }' 00:25:46.736 06:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:25:46.996 06:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:46.996 06:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:46.996 06:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:46.996 06:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:47.933 06:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:47.933 06:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:47.933 06:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:47.933 06:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:47.933 06:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:47.933 06:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:47.933 06:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.933 06:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.191 06:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:48.191 "name": "raid_bdev1", 00:25:48.191 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:48.191 "strip_size_kb": 64, 00:25:48.191 "state": "online", 00:25:48.191 "raid_level": "raid5f", 00:25:48.191 "superblock": true, 00:25:48.191 "num_base_bdevs": 4, 00:25:48.191 "num_base_bdevs_discovered": 4, 00:25:48.191 "num_base_bdevs_operational": 4, 00:25:48.191 "process": { 00:25:48.191 "type": "rebuild", 00:25:48.191 "target": "spare", 00:25:48.191 "progress": { 00:25:48.191 "blocks": 134400, 00:25:48.191 "percent": 70 00:25:48.191 } 00:25:48.191 }, 00:25:48.191 "base_bdevs_list": [ 00:25:48.191 { 00:25:48.191 "name": "spare", 00:25:48.191 "uuid": "82038e30-74bf-5435-91a4-c2de32bb598c", 00:25:48.191 "is_configured": true, 00:25:48.191 "data_offset": 2048, 00:25:48.191 "data_size": 63488 00:25:48.191 }, 00:25:48.191 { 00:25:48.191 "name": "BaseBdev2", 00:25:48.191 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:48.191 "is_configured": true, 00:25:48.191 "data_offset": 2048, 00:25:48.191 "data_size": 63488 00:25:48.191 }, 00:25:48.191 { 00:25:48.191 "name": "BaseBdev3", 00:25:48.191 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:48.191 "is_configured": true, 00:25:48.191 "data_offset": 2048, 00:25:48.191 "data_size": 63488 00:25:48.191 }, 00:25:48.191 { 00:25:48.191 "name": "BaseBdev4", 00:25:48.191 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:48.191 "is_configured": true, 00:25:48.191 "data_offset": 2048, 00:25:48.191 "data_size": 63488 00:25:48.191 } 00:25:48.191 ] 00:25:48.191 }' 00:25:48.191 06:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:48.191 06:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:48.191 06:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:48.450 06:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:48.450 
06:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:49.386 06:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:49.386 06:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:49.386 06:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:49.386 06:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:49.386 06:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:49.386 06:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:49.386 06:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.386 06:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.646 06:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:49.646 "name": "raid_bdev1", 00:25:49.646 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:49.646 "strip_size_kb": 64, 00:25:49.646 "state": "online", 00:25:49.646 "raid_level": "raid5f", 00:25:49.646 "superblock": true, 00:25:49.646 "num_base_bdevs": 4, 00:25:49.646 "num_base_bdevs_discovered": 4, 00:25:49.646 "num_base_bdevs_operational": 4, 00:25:49.646 "process": { 00:25:49.646 "type": "rebuild", 00:25:49.646 "target": "spare", 00:25:49.646 "progress": { 00:25:49.646 "blocks": 159360, 00:25:49.646 "percent": 83 00:25:49.646 } 00:25:49.646 }, 00:25:49.646 "base_bdevs_list": [ 00:25:49.646 { 00:25:49.646 "name": "spare", 00:25:49.646 "uuid": "82038e30-74bf-5435-91a4-c2de32bb598c", 00:25:49.646 "is_configured": true, 00:25:49.646 "data_offset": 2048, 00:25:49.646 "data_size": 63488 00:25:49.646 }, 00:25:49.646 { 00:25:49.646 "name": "BaseBdev2", 00:25:49.646 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:49.646 "is_configured": true, 00:25:49.646 "data_offset": 2048, 00:25:49.646 "data_size": 63488 00:25:49.646 }, 00:25:49.646 { 00:25:49.646 "name": "BaseBdev3", 00:25:49.646 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:49.646 "is_configured": true, 00:25:49.646 "data_offset": 2048, 00:25:49.646 "data_size": 63488 00:25:49.646 }, 00:25:49.646 { 00:25:49.646 "name": "BaseBdev4", 00:25:49.646 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:49.646 "is_configured": true, 00:25:49.646 "data_offset": 2048, 00:25:49.646 "data_size": 63488 00:25:49.646 } 00:25:49.646 ] 00:25:49.646 }' 00:25:49.646 06:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:49.646 06:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:49.646 06:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:49.646 06:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:49.646 06:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:50.585 06:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:50.585 06:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:50.585 06:57:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:50.585 06:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:50.585 06:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:50.585 06:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:50.585 06:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.585 06:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.843 06:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:50.843 "name": "raid_bdev1", 00:25:50.843 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:50.843 "strip_size_kb": 64, 00:25:50.843 "state": "online", 00:25:50.843 "raid_level": "raid5f", 00:25:50.843 "superblock": true, 00:25:50.843 "num_base_bdevs": 4, 00:25:50.843 "num_base_bdevs_discovered": 4, 00:25:50.843 "num_base_bdevs_operational": 4, 00:25:50.843 "process": { 00:25:50.843 "type": "rebuild", 00:25:50.843 "target": "spare", 00:25:50.843 "progress": { 00:25:50.843 "blocks": 184320, 00:25:50.843 "percent": 96 00:25:50.843 } 00:25:50.843 }, 00:25:50.843 "base_bdevs_list": [ 00:25:50.843 { 00:25:50.843 "name": "spare", 00:25:50.843 "uuid": "82038e30-74bf-5435-91a4-c2de32bb598c", 00:25:50.843 "is_configured": true, 00:25:50.843 "data_offset": 2048, 00:25:50.843 "data_size": 63488 00:25:50.843 }, 00:25:50.843 { 00:25:50.843 "name": "BaseBdev2", 00:25:50.843 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:50.843 "is_configured": true, 00:25:50.843 "data_offset": 2048, 00:25:50.843 "data_size": 63488 00:25:50.843 }, 00:25:50.843 { 00:25:50.843 "name": "BaseBdev3", 00:25:50.843 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:50.843 "is_configured": true, 00:25:50.843 "data_offset": 2048, 00:25:50.843 "data_size": 63488 00:25:50.843 }, 00:25:50.843 { 00:25:50.843 "name": "BaseBdev4", 00:25:50.843 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:50.843 "is_configured": true, 00:25:50.843 "data_offset": 2048, 00:25:50.843 "data_size": 63488 00:25:50.843 } 00:25:50.843 ] 00:25:50.843 }' 00:25:50.843 06:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:51.103 06:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:51.103 06:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:51.103 06:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:51.104 06:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:51.104 [2024-08-14 06:57:18.356822] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:51.104 [2024-08-14 06:57:18.356931] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:51.104 [2024-08-14 06:57:18.357129] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:52.040 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:52.040 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
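NOTE: once the "Finished rebuild on raid bdev raid_bdev1" message above is logged, the next bdev_raid_get_bdevs snapshot no longer carries a "process" object. The jq filters in the trace use the // alternative operator precisely so that a missing field compares cleanly: .process.type // "none" now evaluates to "none" instead of "rebuild", the comparison against rebuild fails, and the loop exits through the break at bdev_raid.sh@724 before the steady state is re-verified. The operator can be checked in isolation, for example:

    echo '{"name": "raid_bdev1"}'           | jq -r '.process.type // "none"'   # prints: none
    echo '{"process": {"type": "rebuild"}}' | jq -r '.process.type // "none"'   # prints: rebuild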
00:25:52.040 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:52.040 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:52.040 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:52.040 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:52.040 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.040 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.298 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:52.298 "name": "raid_bdev1", 00:25:52.298 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:52.298 "strip_size_kb": 64, 00:25:52.298 "state": "online", 00:25:52.298 "raid_level": "raid5f", 00:25:52.298 "superblock": true, 00:25:52.298 "num_base_bdevs": 4, 00:25:52.298 "num_base_bdevs_discovered": 4, 00:25:52.298 "num_base_bdevs_operational": 4, 00:25:52.298 "base_bdevs_list": [ 00:25:52.298 { 00:25:52.298 "name": "spare", 00:25:52.298 "uuid": "82038e30-74bf-5435-91a4-c2de32bb598c", 00:25:52.298 "is_configured": true, 00:25:52.298 "data_offset": 2048, 00:25:52.298 "data_size": 63488 00:25:52.298 }, 00:25:52.298 { 00:25:52.298 "name": "BaseBdev2", 00:25:52.298 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:52.298 "is_configured": true, 00:25:52.298 "data_offset": 2048, 00:25:52.298 "data_size": 63488 00:25:52.298 }, 00:25:52.298 { 00:25:52.298 "name": "BaseBdev3", 00:25:52.298 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:52.298 "is_configured": true, 00:25:52.298 "data_offset": 2048, 00:25:52.298 "data_size": 63488 00:25:52.298 }, 00:25:52.298 { 00:25:52.298 "name": "BaseBdev4", 00:25:52.298 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:52.298 "is_configured": true, 00:25:52.298 "data_offset": 2048, 00:25:52.298 "data_size": 63488 00:25:52.298 } 00:25:52.298 ] 00:25:52.298 }' 00:25:52.298 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:52.299 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:52.299 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:52.299 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:52.299 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:25:52.299 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:52.299 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:52.299 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:52.299 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:52.299 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:52.299 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.299 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.562 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:52.562 "name": "raid_bdev1", 00:25:52.562 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:52.562 "strip_size_kb": 64, 00:25:52.562 "state": "online", 00:25:52.562 "raid_level": "raid5f", 00:25:52.562 "superblock": true, 00:25:52.562 "num_base_bdevs": 4, 00:25:52.562 "num_base_bdevs_discovered": 4, 00:25:52.562 "num_base_bdevs_operational": 4, 00:25:52.562 "base_bdevs_list": [ 00:25:52.562 { 00:25:52.562 "name": "spare", 00:25:52.562 "uuid": "82038e30-74bf-5435-91a4-c2de32bb598c", 00:25:52.562 "is_configured": true, 00:25:52.562 "data_offset": 2048, 00:25:52.562 "data_size": 63488 00:25:52.562 }, 00:25:52.562 { 00:25:52.562 "name": "BaseBdev2", 00:25:52.562 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:52.562 "is_configured": true, 00:25:52.562 "data_offset": 2048, 00:25:52.562 "data_size": 63488 00:25:52.562 }, 00:25:52.562 { 00:25:52.562 "name": "BaseBdev3", 00:25:52.562 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:52.562 "is_configured": true, 00:25:52.562 "data_offset": 2048, 00:25:52.562 "data_size": 63488 00:25:52.562 }, 00:25:52.562 { 00:25:52.562 "name": "BaseBdev4", 00:25:52.562 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:52.562 "is_configured": true, 00:25:52.562 "data_offset": 2048, 00:25:52.562 "data_size": 63488 00:25:52.562 } 00:25:52.562 ] 00:25:52.562 }' 00:25:52.562 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:52.820 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:52.820 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:52.820 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:52.820 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:52.820 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:52.820 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:52.820 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:52.820 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:52.820 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:52.820 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:52.820 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:52.820 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:52.820 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:52.820 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.820 06:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.078 06:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:53.078 "name": "raid_bdev1", 00:25:53.078 "uuid": 
"bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:53.078 "strip_size_kb": 64, 00:25:53.078 "state": "online", 00:25:53.078 "raid_level": "raid5f", 00:25:53.078 "superblock": true, 00:25:53.078 "num_base_bdevs": 4, 00:25:53.078 "num_base_bdevs_discovered": 4, 00:25:53.078 "num_base_bdevs_operational": 4, 00:25:53.078 "base_bdevs_list": [ 00:25:53.078 { 00:25:53.078 "name": "spare", 00:25:53.078 "uuid": "82038e30-74bf-5435-91a4-c2de32bb598c", 00:25:53.078 "is_configured": true, 00:25:53.078 "data_offset": 2048, 00:25:53.078 "data_size": 63488 00:25:53.078 }, 00:25:53.078 { 00:25:53.078 "name": "BaseBdev2", 00:25:53.078 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:53.078 "is_configured": true, 00:25:53.078 "data_offset": 2048, 00:25:53.078 "data_size": 63488 00:25:53.078 }, 00:25:53.078 { 00:25:53.078 "name": "BaseBdev3", 00:25:53.078 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:53.078 "is_configured": true, 00:25:53.078 "data_offset": 2048, 00:25:53.078 "data_size": 63488 00:25:53.078 }, 00:25:53.078 { 00:25:53.078 "name": "BaseBdev4", 00:25:53.078 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:53.078 "is_configured": true, 00:25:53.078 "data_offset": 2048, 00:25:53.078 "data_size": 63488 00:25:53.078 } 00:25:53.078 ] 00:25:53.078 }' 00:25:53.078 06:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:53.078 06:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.645 06:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:53.903 [2024-08-14 06:57:20.975002] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:53.903 [2024-08-14 06:57:20.975051] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:53.903 [2024-08-14 06:57:20.975196] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:53.903 [2024-08-14 06:57:20.975321] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:53.903 [2024-08-14 06:57:20.975333] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:25:53.903 06:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.903 06:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:25:54.162 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:25:54.162 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:25:54.162 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:25:54.162 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:54.162 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:54.162 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:54.162 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:54.162 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:54.162 
06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:54.162 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:25:54.162 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:54.162 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:54.162 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:54.419 /dev/nbd0 00:25:54.419 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:54.419 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:54.419 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:25:54.419 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:25:54.419 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:25:54.419 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:25:54.419 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:25:54.419 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:25:54.419 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:25:54.419 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:25:54.419 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:54.420 1+0 records in 00:25:54.420 1+0 records out 00:25:54.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405384 s, 10.1 MB/s 00:25:54.420 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:54.420 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:25:54.420 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:54.420 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:25:54.420 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:25:54.420 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:54.420 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:54.420 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:54.678 /dev/nbd1 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:25:54.678 06:57:21 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:54.678 1+0 records in 00:25:54.678 1+0 records out 00:25:54.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432857 s, 9.5 MB/s 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:54.678 06:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:54.937 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:54.937 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:54.937 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:54.937 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:54.937 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:54.937 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:54.937 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:54.937 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:54.937 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:25:54.937 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:55.195 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:55.195 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:55.195 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:55.195 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:55.195 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:55.195 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:55.195 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:55.195 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:55.195 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:25:55.195 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:55.454 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:55.712 [2024-08-14 06:57:22.711546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:55.712 [2024-08-14 06:57:22.711630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:55.712 [2024-08-14 06:57:22.711657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:55.712 [2024-08-14 06:57:22.711668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:55.712 [2024-08-14 06:57:22.714135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:55.712 [2024-08-14 06:57:22.714189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:55.712 [2024-08-14 06:57:22.714323] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:55.712 [2024-08-14 06:57:22.714367] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:55.712 [2024-08-14 06:57:22.714512] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:55.712 [2024-08-14 06:57:22.714658] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:55.712 [2024-08-14 06:57:22.714758] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:55.712 spare 00:25:55.712 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:55.712 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:55.712 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:55.712 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:55.712 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:55.712 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:25:55.712 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:55.712 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:55.712 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:55.712 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:55.713 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.713 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.713 [2024-08-14 06:57:22.814681] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:25:55.713 [2024-08-14 06:57:22.814738] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:55.713 [2024-08-14 06:57:22.815115] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045820 00:25:55.713 [2024-08-14 06:57:22.815734] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:25:55.713 [2024-08-14 06:57:22.815757] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:25:55.713 [2024-08-14 06:57:22.815947] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:55.971 06:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:55.971 "name": "raid_bdev1", 00:25:55.971 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:55.971 "strip_size_kb": 64, 00:25:55.971 "state": "online", 00:25:55.971 "raid_level": "raid5f", 00:25:55.971 "superblock": true, 00:25:55.971 "num_base_bdevs": 4, 00:25:55.971 "num_base_bdevs_discovered": 4, 00:25:55.971 "num_base_bdevs_operational": 4, 00:25:55.971 "base_bdevs_list": [ 00:25:55.971 { 00:25:55.971 "name": "spare", 00:25:55.971 "uuid": "82038e30-74bf-5435-91a4-c2de32bb598c", 00:25:55.971 "is_configured": true, 00:25:55.971 "data_offset": 2048, 00:25:55.971 "data_size": 63488 00:25:55.971 }, 00:25:55.971 { 00:25:55.971 "name": "BaseBdev2", 00:25:55.971 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:55.971 "is_configured": true, 00:25:55.971 "data_offset": 2048, 00:25:55.971 "data_size": 63488 00:25:55.971 }, 00:25:55.971 { 00:25:55.971 "name": "BaseBdev3", 00:25:55.971 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:55.971 "is_configured": true, 00:25:55.971 "data_offset": 2048, 00:25:55.971 "data_size": 63488 00:25:55.971 }, 00:25:55.971 { 00:25:55.971 "name": "BaseBdev4", 00:25:55.971 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:55.971 "is_configured": true, 00:25:55.971 "data_offset": 2048, 00:25:55.971 "data_size": 63488 00:25:55.971 } 00:25:55.971 ] 00:25:55.971 }' 00:25:55.971 06:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:55.971 06:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.537 06:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:56.537 06:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:56.537 06:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:56.537 06:57:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:56.537 06:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:56.537 06:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.537 06:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.795 06:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:56.795 "name": "raid_bdev1", 00:25:56.795 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:56.795 "strip_size_kb": 64, 00:25:56.795 "state": "online", 00:25:56.795 "raid_level": "raid5f", 00:25:56.795 "superblock": true, 00:25:56.795 "num_base_bdevs": 4, 00:25:56.795 "num_base_bdevs_discovered": 4, 00:25:56.795 "num_base_bdevs_operational": 4, 00:25:56.795 "base_bdevs_list": [ 00:25:56.795 { 00:25:56.795 "name": "spare", 00:25:56.795 "uuid": "82038e30-74bf-5435-91a4-c2de32bb598c", 00:25:56.795 "is_configured": true, 00:25:56.795 "data_offset": 2048, 00:25:56.795 "data_size": 63488 00:25:56.795 }, 00:25:56.795 { 00:25:56.795 "name": "BaseBdev2", 00:25:56.795 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:56.795 "is_configured": true, 00:25:56.795 "data_offset": 2048, 00:25:56.795 "data_size": 63488 00:25:56.795 }, 00:25:56.795 { 00:25:56.795 "name": "BaseBdev3", 00:25:56.795 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:56.795 "is_configured": true, 00:25:56.795 "data_offset": 2048, 00:25:56.795 "data_size": 63488 00:25:56.795 }, 00:25:56.795 { 00:25:56.795 "name": "BaseBdev4", 00:25:56.795 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:56.795 "is_configured": true, 00:25:56.795 "data_offset": 2048, 00:25:56.795 "data_size": 63488 00:25:56.795 } 00:25:56.795 ] 00:25:56.795 }' 00:25:56.795 06:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:56.795 06:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:56.795 06:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:56.795 06:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:56.795 06:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.795 06:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:57.053 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:25:57.053 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:57.311 [2024-08-14 06:57:24.377546] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:57.311 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:57.311 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:57.311 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:57.311 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid5f 00:25:57.311 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:57.311 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:57.311 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:57.311 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:57.311 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:57.311 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:57.311 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.311 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.569 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:57.569 "name": "raid_bdev1", 00:25:57.569 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:57.569 "strip_size_kb": 64, 00:25:57.569 "state": "online", 00:25:57.569 "raid_level": "raid5f", 00:25:57.569 "superblock": true, 00:25:57.569 "num_base_bdevs": 4, 00:25:57.569 "num_base_bdevs_discovered": 3, 00:25:57.569 "num_base_bdevs_operational": 3, 00:25:57.569 "base_bdevs_list": [ 00:25:57.569 { 00:25:57.569 "name": null, 00:25:57.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.569 "is_configured": false, 00:25:57.569 "data_offset": 2048, 00:25:57.569 "data_size": 63488 00:25:57.569 }, 00:25:57.569 { 00:25:57.569 "name": "BaseBdev2", 00:25:57.569 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:57.569 "is_configured": true, 00:25:57.569 "data_offset": 2048, 00:25:57.569 "data_size": 63488 00:25:57.569 }, 00:25:57.569 { 00:25:57.569 "name": "BaseBdev3", 00:25:57.569 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:57.569 "is_configured": true, 00:25:57.569 "data_offset": 2048, 00:25:57.569 "data_size": 63488 00:25:57.569 }, 00:25:57.569 { 00:25:57.569 "name": "BaseBdev4", 00:25:57.569 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:57.569 "is_configured": true, 00:25:57.569 "data_offset": 2048, 00:25:57.569 "data_size": 63488 00:25:57.569 } 00:25:57.569 ] 00:25:57.569 }' 00:25:57.569 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.569 06:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.136 06:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:58.394 [2024-08-14 06:57:25.435813] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:58.394 [2024-08-14 06:57:25.436013] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:58.394 [2024-08-14 06:57:25.436034] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
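NOTE: at this point the spare has been pulled out of the array (bdev_raid_remove_base_bdev above) and the snapshot just dumped shows the degraded state: num_base_bdevs_discovered drops to 3 and the first slot in base_bdevs_list has a null name while the raid bdev stays online, as expected for raid5f with one missing member. When the spare is handed back to the raid module, the examine path reads the raid superblock still present on it, sees that its sequence number (4) is older than the live array's (5), re-adds it as the stale member and starts another rebuild, which the next rebuild/spare verification in the trace then tracks. One way to watch that transition by hand, assuming the same RPC socket the test uses:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")
        | {state, num_base_bdevs_discovered, num_base_bdevs_operational,
           process: (.process.type // "none")}'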
00:25:58.394 [2024-08-14 06:57:25.436084] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:58.394 [2024-08-14 06:57:25.439395] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000458f0 00:25:58.394 [2024-08-14 06:57:25.441897] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:58.394 06:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:25:59.328 06:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:59.328 06:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:59.328 06:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:59.328 06:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:59.328 06:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:59.328 06:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.328 06:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.591 06:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:59.591 "name": "raid_bdev1", 00:25:59.591 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:25:59.591 "strip_size_kb": 64, 00:25:59.591 "state": "online", 00:25:59.591 "raid_level": "raid5f", 00:25:59.591 "superblock": true, 00:25:59.591 "num_base_bdevs": 4, 00:25:59.591 "num_base_bdevs_discovered": 4, 00:25:59.591 "num_base_bdevs_operational": 4, 00:25:59.591 "process": { 00:25:59.591 "type": "rebuild", 00:25:59.591 "target": "spare", 00:25:59.591 "progress": { 00:25:59.591 "blocks": 23040, 00:25:59.591 "percent": 12 00:25:59.591 } 00:25:59.591 }, 00:25:59.591 "base_bdevs_list": [ 00:25:59.591 { 00:25:59.591 "name": "spare", 00:25:59.591 "uuid": "82038e30-74bf-5435-91a4-c2de32bb598c", 00:25:59.591 "is_configured": true, 00:25:59.591 "data_offset": 2048, 00:25:59.591 "data_size": 63488 00:25:59.591 }, 00:25:59.591 { 00:25:59.591 "name": "BaseBdev2", 00:25:59.591 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:25:59.591 "is_configured": true, 00:25:59.591 "data_offset": 2048, 00:25:59.591 "data_size": 63488 00:25:59.591 }, 00:25:59.591 { 00:25:59.591 "name": "BaseBdev3", 00:25:59.591 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:25:59.591 "is_configured": true, 00:25:59.591 "data_offset": 2048, 00:25:59.591 "data_size": 63488 00:25:59.591 }, 00:25:59.591 { 00:25:59.591 "name": "BaseBdev4", 00:25:59.591 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:25:59.591 "is_configured": true, 00:25:59.591 "data_offset": 2048, 00:25:59.591 "data_size": 63488 00:25:59.591 } 00:25:59.591 ] 00:25:59.591 }' 00:25:59.591 06:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:59.591 06:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:59.591 06:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:59.591 06:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:59.591 06:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:59.857 [2024-08-14 06:57:27.025014] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:59.857 [2024-08-14 06:57:27.053009] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:59.857 [2024-08-14 06:57:27.053100] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:59.857 [2024-08-14 06:57:27.053149] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:59.858 [2024-08-14 06:57:27.053163] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:59.858 06:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:59.858 06:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:59.858 06:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:59.858 06:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:59.858 06:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:59.858 06:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:59.858 06:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:59.858 06:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:59.858 06:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:59.858 06:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:59.858 06:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.858 06:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.116 06:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:00.116 "name": "raid_bdev1", 00:26:00.116 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:26:00.116 "strip_size_kb": 64, 00:26:00.116 "state": "online", 00:26:00.116 "raid_level": "raid5f", 00:26:00.116 "superblock": true, 00:26:00.116 "num_base_bdevs": 4, 00:26:00.116 "num_base_bdevs_discovered": 3, 00:26:00.116 "num_base_bdevs_operational": 3, 00:26:00.116 "base_bdevs_list": [ 00:26:00.116 { 00:26:00.116 "name": null, 00:26:00.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.116 "is_configured": false, 00:26:00.116 "data_offset": 2048, 00:26:00.116 "data_size": 63488 00:26:00.116 }, 00:26:00.116 { 00:26:00.116 "name": "BaseBdev2", 00:26:00.116 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:26:00.116 "is_configured": true, 00:26:00.116 "data_offset": 2048, 00:26:00.116 "data_size": 63488 00:26:00.116 }, 00:26:00.116 { 00:26:00.116 "name": "BaseBdev3", 00:26:00.116 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:26:00.116 "is_configured": true, 00:26:00.116 "data_offset": 2048, 00:26:00.116 "data_size": 63488 00:26:00.116 }, 00:26:00.116 { 00:26:00.116 "name": "BaseBdev4", 00:26:00.116 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:26:00.116 "is_configured": true, 00:26:00.116 "data_offset": 2048, 00:26:00.116 "data_size": 63488 
00:26:00.116 } 00:26:00.116 ] 00:26:00.116 }' 00:26:00.116 06:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:00.116 06:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:00.683 06:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:00.942 [2024-08-14 06:57:28.153025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:00.942 [2024-08-14 06:57:28.153123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:00.942 [2024-08-14 06:57:28.153147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:26:00.942 [2024-08-14 06:57:28.153160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:00.942 [2024-08-14 06:57:28.153675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:00.942 [2024-08-14 06:57:28.153710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:00.942 [2024-08-14 06:57:28.153809] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:00.942 [2024-08-14 06:57:28.153826] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:00.942 [2024-08-14 06:57:28.153837] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:00.942 [2024-08-14 06:57:28.153869] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:00.942 [2024-08-14 06:57:28.157268] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000459c0 00:26:00.942 spare 00:26:00.942 [2024-08-14 06:57:28.159797] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:00.942 06:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:26:02.317 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:02.317 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:02.317 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:02.317 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:02.317 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:02.317 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.317 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.317 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:02.317 "name": "raid_bdev1", 00:26:02.317 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:26:02.317 "strip_size_kb": 64, 00:26:02.317 "state": "online", 00:26:02.317 "raid_level": "raid5f", 00:26:02.317 "superblock": true, 00:26:02.317 "num_base_bdevs": 4, 00:26:02.318 "num_base_bdevs_discovered": 4, 00:26:02.318 "num_base_bdevs_operational": 4, 00:26:02.318 "process": { 00:26:02.318 "type": "rebuild", 00:26:02.318 "target": "spare", 
00:26:02.318 "progress": { 00:26:02.318 "blocks": 23040, 00:26:02.318 "percent": 12 00:26:02.318 } 00:26:02.318 }, 00:26:02.318 "base_bdevs_list": [ 00:26:02.318 { 00:26:02.318 "name": "spare", 00:26:02.318 "uuid": "82038e30-74bf-5435-91a4-c2de32bb598c", 00:26:02.318 "is_configured": true, 00:26:02.318 "data_offset": 2048, 00:26:02.318 "data_size": 63488 00:26:02.318 }, 00:26:02.318 { 00:26:02.318 "name": "BaseBdev2", 00:26:02.318 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:26:02.318 "is_configured": true, 00:26:02.318 "data_offset": 2048, 00:26:02.318 "data_size": 63488 00:26:02.318 }, 00:26:02.318 { 00:26:02.318 "name": "BaseBdev3", 00:26:02.318 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:26:02.318 "is_configured": true, 00:26:02.318 "data_offset": 2048, 00:26:02.318 "data_size": 63488 00:26:02.318 }, 00:26:02.318 { 00:26:02.318 "name": "BaseBdev4", 00:26:02.318 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:26:02.318 "is_configured": true, 00:26:02.318 "data_offset": 2048, 00:26:02.318 "data_size": 63488 00:26:02.318 } 00:26:02.318 ] 00:26:02.318 }' 00:26:02.318 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:02.318 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:02.318 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:02.318 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:02.318 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:02.577 [2024-08-14 06:57:29.708381] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:02.577 [2024-08-14 06:57:29.771918] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:02.577 [2024-08-14 06:57:29.772027] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:02.577 [2024-08-14 06:57:29.772054] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:02.577 [2024-08-14 06:57:29.772062] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:02.577 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:02.577 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:02.577 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:02.577 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:02.577 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:02.577 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:02.577 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:02.577 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:02.577 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:02.577 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:02.577 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.577 06:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.836 06:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:02.836 "name": "raid_bdev1", 00:26:02.836 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:26:02.836 "strip_size_kb": 64, 00:26:02.836 "state": "online", 00:26:02.836 "raid_level": "raid5f", 00:26:02.836 "superblock": true, 00:26:02.836 "num_base_bdevs": 4, 00:26:02.836 "num_base_bdevs_discovered": 3, 00:26:02.836 "num_base_bdevs_operational": 3, 00:26:02.836 "base_bdevs_list": [ 00:26:02.836 { 00:26:02.836 "name": null, 00:26:02.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.836 "is_configured": false, 00:26:02.836 "data_offset": 2048, 00:26:02.836 "data_size": 63488 00:26:02.836 }, 00:26:02.836 { 00:26:02.836 "name": "BaseBdev2", 00:26:02.836 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:26:02.836 "is_configured": true, 00:26:02.836 "data_offset": 2048, 00:26:02.836 "data_size": 63488 00:26:02.836 }, 00:26:02.836 { 00:26:02.836 "name": "BaseBdev3", 00:26:02.836 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:26:02.836 "is_configured": true, 00:26:02.836 "data_offset": 2048, 00:26:02.836 "data_size": 63488 00:26:02.836 }, 00:26:02.836 { 00:26:02.836 "name": "BaseBdev4", 00:26:02.836 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:26:02.836 "is_configured": true, 00:26:02.836 "data_offset": 2048, 00:26:02.836 "data_size": 63488 00:26:02.836 } 00:26:02.836 ] 00:26:02.836 }' 00:26:02.836 06:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:02.836 06:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.405 06:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:03.405 06:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:03.405 06:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:03.405 06:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:03.405 06:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:03.405 06:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.405 06:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.666 06:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:03.666 "name": "raid_bdev1", 00:26:03.666 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:26:03.666 "strip_size_kb": 64, 00:26:03.666 "state": "online", 00:26:03.666 "raid_level": "raid5f", 00:26:03.666 "superblock": true, 00:26:03.666 "num_base_bdevs": 4, 00:26:03.666 "num_base_bdevs_discovered": 3, 00:26:03.666 "num_base_bdevs_operational": 3, 00:26:03.666 "base_bdevs_list": [ 00:26:03.666 { 00:26:03.666 "name": null, 00:26:03.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.666 "is_configured": false, 00:26:03.666 "data_offset": 2048, 00:26:03.666 "data_size": 63488 00:26:03.666 }, 00:26:03.666 { 00:26:03.666 "name": "BaseBdev2", 00:26:03.666 "uuid": 
"5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:26:03.666 "is_configured": true, 00:26:03.666 "data_offset": 2048, 00:26:03.666 "data_size": 63488 00:26:03.666 }, 00:26:03.666 { 00:26:03.666 "name": "BaseBdev3", 00:26:03.666 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:26:03.666 "is_configured": true, 00:26:03.666 "data_offset": 2048, 00:26:03.666 "data_size": 63488 00:26:03.666 }, 00:26:03.666 { 00:26:03.666 "name": "BaseBdev4", 00:26:03.666 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:26:03.666 "is_configured": true, 00:26:03.666 "data_offset": 2048, 00:26:03.666 "data_size": 63488 00:26:03.666 } 00:26:03.666 ] 00:26:03.666 }' 00:26:03.666 06:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:03.666 06:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:03.666 06:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:03.926 06:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:03.926 06:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:03.926 06:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:04.185 [2024-08-14 06:57:31.319030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:04.185 [2024-08-14 06:57:31.319228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:04.185 [2024-08-14 06:57:31.319278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:26:04.185 [2024-08-14 06:57:31.319318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:04.185 [2024-08-14 06:57:31.319802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:04.185 [2024-08-14 06:57:31.319865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:04.185 [2024-08-14 06:57:31.319982] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:04.185 [2024-08-14 06:57:31.320026] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:04.185 [2024-08-14 06:57:31.320075] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:04.185 BaseBdev1 00:26:04.185 06:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:26:05.123 06:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:05.123 06:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:05.123 06:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:05.123 06:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:05.123 06:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:05.123 06:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:05.123 06:57:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:05.123 06:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:05.123 06:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:05.123 06:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:05.123 06:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.123 06:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.382 06:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:05.382 "name": "raid_bdev1", 00:26:05.382 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:26:05.382 "strip_size_kb": 64, 00:26:05.382 "state": "online", 00:26:05.382 "raid_level": "raid5f", 00:26:05.382 "superblock": true, 00:26:05.382 "num_base_bdevs": 4, 00:26:05.382 "num_base_bdevs_discovered": 3, 00:26:05.382 "num_base_bdevs_operational": 3, 00:26:05.382 "base_bdevs_list": [ 00:26:05.382 { 00:26:05.382 "name": null, 00:26:05.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.382 "is_configured": false, 00:26:05.382 "data_offset": 2048, 00:26:05.382 "data_size": 63488 00:26:05.382 }, 00:26:05.382 { 00:26:05.382 "name": "BaseBdev2", 00:26:05.382 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:26:05.382 "is_configured": true, 00:26:05.382 "data_offset": 2048, 00:26:05.382 "data_size": 63488 00:26:05.382 }, 00:26:05.382 { 00:26:05.382 "name": "BaseBdev3", 00:26:05.382 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:26:05.382 "is_configured": true, 00:26:05.382 "data_offset": 2048, 00:26:05.382 "data_size": 63488 00:26:05.382 }, 00:26:05.382 { 00:26:05.383 "name": "BaseBdev4", 00:26:05.383 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:26:05.383 "is_configured": true, 00:26:05.383 "data_offset": 2048, 00:26:05.383 "data_size": 63488 00:26:05.383 } 00:26:05.383 ] 00:26:05.383 }' 00:26:05.383 06:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:05.383 06:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.968 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:05.968 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:05.968 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:05.968 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:05.968 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:05.968 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.968 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:06.248 "name": "raid_bdev1", 00:26:06.248 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:26:06.248 "strip_size_kb": 64, 00:26:06.248 "state": "online", 00:26:06.248 "raid_level": "raid5f", 00:26:06.248 "superblock": true, 
00:26:06.248 "num_base_bdevs": 4, 00:26:06.248 "num_base_bdevs_discovered": 3, 00:26:06.248 "num_base_bdevs_operational": 3, 00:26:06.248 "base_bdevs_list": [ 00:26:06.248 { 00:26:06.248 "name": null, 00:26:06.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.248 "is_configured": false, 00:26:06.248 "data_offset": 2048, 00:26:06.248 "data_size": 63488 00:26:06.248 }, 00:26:06.248 { 00:26:06.248 "name": "BaseBdev2", 00:26:06.248 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:26:06.248 "is_configured": true, 00:26:06.248 "data_offset": 2048, 00:26:06.248 "data_size": 63488 00:26:06.248 }, 00:26:06.248 { 00:26:06.248 "name": "BaseBdev3", 00:26:06.248 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:26:06.248 "is_configured": true, 00:26:06.248 "data_offset": 2048, 00:26:06.248 "data_size": 63488 00:26:06.248 }, 00:26:06.248 { 00:26:06.248 "name": "BaseBdev4", 00:26:06.248 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:26:06.248 "is_configured": true, 00:26:06.248 "data_offset": 2048, 00:26:06.248 "data_size": 63488 00:26:06.248 } 00:26:06.248 ] 00:26:06.248 }' 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@646 -- # local es=0 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:06.248 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:06.507 [2024-08-14 06:57:33.687118] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:06.507 [2024-08-14 06:57:33.687405] 
bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:06.507 [2024-08-14 06:57:33.687486] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:06.507 request: 00:26:06.507 { 00:26:06.507 "base_bdev": "BaseBdev1", 00:26:06.507 "raid_bdev": "raid_bdev1", 00:26:06.507 "method": "bdev_raid_add_base_bdev", 00:26:06.507 "req_id": 1 00:26:06.507 } 00:26:06.507 Got JSON-RPC error response 00:26:06.507 response: 00:26:06.507 { 00:26:06.507 "code": -22, 00:26:06.507 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:06.507 } 00:26:06.507 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@649 -- # es=1 00:26:06.507 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:26:06.507 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:26:06.507 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:26:06.507 06:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:26:07.885 06:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:07.885 06:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:07.885 06:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:07.885 06:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:07.885 06:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:07.885 06:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:07.885 06:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:07.885 06:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:07.885 06:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:07.885 06:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:07.885 06:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.885 06:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.885 06:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:07.885 "name": "raid_bdev1", 00:26:07.885 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:26:07.885 "strip_size_kb": 64, 00:26:07.885 "state": "online", 00:26:07.885 "raid_level": "raid5f", 00:26:07.885 "superblock": true, 00:26:07.885 "num_base_bdevs": 4, 00:26:07.885 "num_base_bdevs_discovered": 3, 00:26:07.885 "num_base_bdevs_operational": 3, 00:26:07.885 "base_bdevs_list": [ 00:26:07.885 { 00:26:07.885 "name": null, 00:26:07.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.885 "is_configured": false, 00:26:07.885 "data_offset": 2048, 00:26:07.885 "data_size": 63488 00:26:07.885 }, 00:26:07.885 { 00:26:07.885 "name": "BaseBdev2", 00:26:07.885 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:26:07.885 "is_configured": true, 00:26:07.885 "data_offset": 2048, 00:26:07.885 
"data_size": 63488 00:26:07.885 }, 00:26:07.885 { 00:26:07.885 "name": "BaseBdev3", 00:26:07.885 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:26:07.885 "is_configured": true, 00:26:07.885 "data_offset": 2048, 00:26:07.885 "data_size": 63488 00:26:07.885 }, 00:26:07.885 { 00:26:07.885 "name": "BaseBdev4", 00:26:07.885 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:26:07.885 "is_configured": true, 00:26:07.885 "data_offset": 2048, 00:26:07.885 "data_size": 63488 00:26:07.885 } 00:26:07.885 ] 00:26:07.885 }' 00:26:07.885 06:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:07.885 06:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.454 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:08.454 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:08.454 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:08.454 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:08.454 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:08.454 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.454 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.713 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:08.713 "name": "raid_bdev1", 00:26:08.713 "uuid": "bb29b26d-865a-42a7-ab4a-0a7fdaeca791", 00:26:08.713 "strip_size_kb": 64, 00:26:08.713 "state": "online", 00:26:08.713 "raid_level": "raid5f", 00:26:08.713 "superblock": true, 00:26:08.713 "num_base_bdevs": 4, 00:26:08.713 "num_base_bdevs_discovered": 3, 00:26:08.713 "num_base_bdevs_operational": 3, 00:26:08.713 "base_bdevs_list": [ 00:26:08.713 { 00:26:08.713 "name": null, 00:26:08.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.713 "is_configured": false, 00:26:08.713 "data_offset": 2048, 00:26:08.713 "data_size": 63488 00:26:08.713 }, 00:26:08.713 { 00:26:08.713 "name": "BaseBdev2", 00:26:08.713 "uuid": "5a6fba37-4c6a-5468-b171-5d4ceff4e7d1", 00:26:08.713 "is_configured": true, 00:26:08.713 "data_offset": 2048, 00:26:08.713 "data_size": 63488 00:26:08.713 }, 00:26:08.713 { 00:26:08.713 "name": "BaseBdev3", 00:26:08.713 "uuid": "e6dfb025-6e59-5a6e-b5cb-2de354aa57f6", 00:26:08.713 "is_configured": true, 00:26:08.713 "data_offset": 2048, 00:26:08.713 "data_size": 63488 00:26:08.713 }, 00:26:08.713 { 00:26:08.713 "name": "BaseBdev4", 00:26:08.713 "uuid": "ce897d0a-283d-57c1-8a3d-bbda1ece4fd0", 00:26:08.713 "is_configured": true, 00:26:08.713 "data_offset": 2048, 00:26:08.713 "data_size": 63488 00:26:08.713 } 00:26:08.713 ] 00:26:08.713 }' 00:26:08.713 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:08.713 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:08.713 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:08.713 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:08.713 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@798 -- # killprocess 105291 00:26:08.713 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 105291 ']' 00:26:08.713 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 105291 00:26:08.713 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:26:08.713 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:08.713 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 105291 00:26:08.713 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:08.713 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:08.713 killing process with pid 105291 00:26:08.713 Received shutdown signal, test time was about 60.000000 seconds 00:26:08.713 00:26:08.713 Latency(us) 00:26:08.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.713 =================================================================================================================== 00:26:08.713 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:08.713 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 105291' 00:26:08.713 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 105291 00:26:08.713 [2024-08-14 06:57:35.876270] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:08.713 [2024-08-14 06:57:35.876403] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:08.713 06:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 105291 00:26:08.713 [2024-08-14 06:57:35.876481] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:08.713 [2024-08-14 06:57:35.876492] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:26:08.713 [2024-08-14 06:57:35.927689] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:08.973 06:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:26:08.973 00:26:08.973 real 0m38.262s 00:26:08.973 user 0m58.553s 00:26:08.973 sys 0m4.797s 00:26:08.973 06:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:08.973 06:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.973 ************************************ 00:26:08.973 END TEST raid5f_rebuild_test_sb 00:26:08.973 ************************************ 00:26:08.973 06:57:36 bdev_raid -- bdev/bdev_raid.sh@974 -- # base_blocklen=4096 00:26:08.973 06:57:36 bdev_raid -- bdev/bdev_raid.sh@976 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:26:08.973 06:57:36 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:26:08.973 06:57:36 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:08.973 06:57:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:09.232 ************************************ 00:26:09.232 START TEST raid_state_function_test_sb_4k 00:26:09.232 ************************************ 00:26:09.232 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1121 -- # raid_state_function_test 
raid1 2 true 00:26:09.232 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:09.232 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:26:09.232 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:09.232 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:09.232 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:09.232 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:09.232 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:26:09.232 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:09.232 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=106230 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 106230' 00:26:09.233 Process raid pid: 106230 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 106230 /var/tmp/spdk-raid.sock 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@827 -- # '[' -z 106230 ']' 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:09.233 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:09.233 06:57:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:09.233 [2024-08-14 06:57:36.326074] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:26:09.233 [2024-08-14 06:57:36.326327] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.233 [2024-08-14 06:57:36.475427] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.492 [2024-08-14 06:57:36.529721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.492 [2024-08-14 06:57:36.575136] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:09.492 [2024-08-14 06:57:36.575304] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:10.059 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:10.059 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # return 0 00:26:10.059 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:10.317 [2024-08-14 06:57:37.431395] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:10.317 [2024-08-14 06:57:37.431457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:10.317 [2024-08-14 06:57:37.431470] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:10.317 [2024-08-14 06:57:37.431480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:10.317 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:10.317 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:10.317 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:10.317 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:10.317 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:10.317 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:10.317 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:10.317 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:10.317 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:10.317 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:10.317 
06:57:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.317 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:10.595 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:10.595 "name": "Existed_Raid", 00:26:10.595 "uuid": "daaf3959-ed16-46e0-b675-97c63740c195", 00:26:10.595 "strip_size_kb": 0, 00:26:10.595 "state": "configuring", 00:26:10.595 "raid_level": "raid1", 00:26:10.595 "superblock": true, 00:26:10.595 "num_base_bdevs": 2, 00:26:10.595 "num_base_bdevs_discovered": 0, 00:26:10.595 "num_base_bdevs_operational": 2, 00:26:10.595 "base_bdevs_list": [ 00:26:10.595 { 00:26:10.595 "name": "BaseBdev1", 00:26:10.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.595 "is_configured": false, 00:26:10.595 "data_offset": 0, 00:26:10.595 "data_size": 0 00:26:10.595 }, 00:26:10.595 { 00:26:10.595 "name": "BaseBdev2", 00:26:10.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.595 "is_configured": false, 00:26:10.595 "data_offset": 0, 00:26:10.595 "data_size": 0 00:26:10.595 } 00:26:10.595 ] 00:26:10.595 }' 00:26:10.595 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:10.595 06:57:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:11.212 06:57:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:11.470 [2024-08-14 06:57:38.469603] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:11.470 [2024-08-14 06:57:38.469730] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:26:11.470 06:57:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:11.470 [2024-08-14 06:57:38.681286] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:11.470 [2024-08-14 06:57:38.681340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:11.470 [2024-08-14 06:57:38.681362] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:11.470 [2024-08-14 06:57:38.681371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:11.470 06:57:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:26:11.729 [2024-08-14 06:57:38.901895] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:11.729 BaseBdev1 00:26:11.729 06:57:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:11.729 06:57:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:26:11.729 06:57:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:11.729 06:57:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # 
local i 00:26:11.729 06:57:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:11.729 06:57:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:11.729 06:57:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:11.988 06:57:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:12.248 [ 00:26:12.248 { 00:26:12.248 "name": "BaseBdev1", 00:26:12.248 "aliases": [ 00:26:12.248 "350b161f-f34f-45e0-a326-97cef6e37ad0" 00:26:12.248 ], 00:26:12.248 "product_name": "Malloc disk", 00:26:12.248 "block_size": 4096, 00:26:12.248 "num_blocks": 8192, 00:26:12.248 "uuid": "350b161f-f34f-45e0-a326-97cef6e37ad0", 00:26:12.248 "assigned_rate_limits": { 00:26:12.248 "rw_ios_per_sec": 0, 00:26:12.248 "rw_mbytes_per_sec": 0, 00:26:12.248 "r_mbytes_per_sec": 0, 00:26:12.248 "w_mbytes_per_sec": 0 00:26:12.248 }, 00:26:12.248 "claimed": true, 00:26:12.248 "claim_type": "exclusive_write", 00:26:12.248 "zoned": false, 00:26:12.248 "supported_io_types": { 00:26:12.248 "read": true, 00:26:12.248 "write": true, 00:26:12.248 "unmap": true, 00:26:12.248 "flush": true, 00:26:12.248 "reset": true, 00:26:12.248 "nvme_admin": false, 00:26:12.248 "nvme_io": false, 00:26:12.248 "nvme_io_md": false, 00:26:12.248 "write_zeroes": true, 00:26:12.248 "zcopy": true, 00:26:12.248 "get_zone_info": false, 00:26:12.248 "zone_management": false, 00:26:12.248 "zone_append": false, 00:26:12.248 "compare": false, 00:26:12.248 "compare_and_write": false, 00:26:12.248 "abort": true, 00:26:12.248 "seek_hole": false, 00:26:12.248 "seek_data": false, 00:26:12.248 "copy": true, 00:26:12.248 "nvme_iov_md": false 00:26:12.248 }, 00:26:12.248 "memory_domains": [ 00:26:12.248 { 00:26:12.248 "dma_device_id": "system", 00:26:12.248 "dma_device_type": 1 00:26:12.248 }, 00:26:12.248 { 00:26:12.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:12.248 "dma_device_type": 2 00:26:12.248 } 00:26:12.248 ], 00:26:12.248 "driver_specific": {} 00:26:12.248 } 00:26:12.248 ] 00:26:12.248 06:57:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:26:12.248 06:57:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:12.248 06:57:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:12.248 06:57:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:12.248 06:57:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:12.248 06:57:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:12.248 06:57:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:12.248 06:57:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:12.248 06:57:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:12.248 06:57:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:12.248 06:57:39 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:12.248 06:57:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:12.248 06:57:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.508 06:57:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:12.508 "name": "Existed_Raid", 00:26:12.508 "uuid": "72333d89-32fb-4737-a95c-1096f5eecc79", 00:26:12.508 "strip_size_kb": 0, 00:26:12.508 "state": "configuring", 00:26:12.508 "raid_level": "raid1", 00:26:12.508 "superblock": true, 00:26:12.508 "num_base_bdevs": 2, 00:26:12.508 "num_base_bdevs_discovered": 1, 00:26:12.508 "num_base_bdevs_operational": 2, 00:26:12.508 "base_bdevs_list": [ 00:26:12.508 { 00:26:12.508 "name": "BaseBdev1", 00:26:12.508 "uuid": "350b161f-f34f-45e0-a326-97cef6e37ad0", 00:26:12.508 "is_configured": true, 00:26:12.508 "data_offset": 256, 00:26:12.508 "data_size": 7936 00:26:12.508 }, 00:26:12.508 { 00:26:12.508 "name": "BaseBdev2", 00:26:12.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.508 "is_configured": false, 00:26:12.508 "data_offset": 0, 00:26:12.508 "data_size": 0 00:26:12.508 } 00:26:12.508 ] 00:26:12.508 }' 00:26:12.508 06:57:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:12.508 06:57:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:13.078 06:57:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:13.336 [2024-08-14 06:57:40.463368] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:13.336 [2024-08-14 06:57:40.463504] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:26:13.336 06:57:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:13.595 [2024-08-14 06:57:40.711013] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:13.595 [2024-08-14 06:57:40.713173] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:13.595 [2024-08-14 06:57:40.713225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:13.595 06:57:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:13.595 06:57:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:13.595 06:57:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:13.595 06:57:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:13.595 06:57:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:13.595 06:57:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:13.595 06:57:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
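For reference, the state checks traced in this test reduce to a handful of JSON-RPC calls against the test application's socket. A minimal sketch of the same flow, assuming a running SPDK target listening on /var/tmp/spdk-raid.sock and the repository paths used in this run, would be:

    # create two malloc bdevs to serve as base devices (8192 blocks of 4096 bytes each, as reported for BaseBdev1 above)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2
    # assemble them into a raid1 bdev with an on-disk superblock (-s); the script passes no strip size for raid1 (strip_size_kb reported as 0)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    # query the resulting state and pick out the one raid bdev, as verify_raid_bdev_state does
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

The assertions in bdev_raid.sh are made against fields of that JSON (state, raid_level, num_base_bdevs_discovered, base_bdevs_list) rather than against free-form output, which is why each check in the trace is an rpc.py call piped through a jq filter.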
00:26:13.595 06:57:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:13.595 06:57:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:13.595 06:57:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:13.595 06:57:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:13.595 06:57:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:13.595 06:57:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.595 06:57:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:13.854 06:57:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:13.854 "name": "Existed_Raid", 00:26:13.854 "uuid": "1f74b006-08c1-49be-ba63-e2ae7da0d5a9", 00:26:13.854 "strip_size_kb": 0, 00:26:13.854 "state": "configuring", 00:26:13.854 "raid_level": "raid1", 00:26:13.854 "superblock": true, 00:26:13.854 "num_base_bdevs": 2, 00:26:13.854 "num_base_bdevs_discovered": 1, 00:26:13.854 "num_base_bdevs_operational": 2, 00:26:13.854 "base_bdevs_list": [ 00:26:13.854 { 00:26:13.854 "name": "BaseBdev1", 00:26:13.854 "uuid": "350b161f-f34f-45e0-a326-97cef6e37ad0", 00:26:13.854 "is_configured": true, 00:26:13.854 "data_offset": 256, 00:26:13.854 "data_size": 7936 00:26:13.854 }, 00:26:13.854 { 00:26:13.854 "name": "BaseBdev2", 00:26:13.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.854 "is_configured": false, 00:26:13.854 "data_offset": 0, 00:26:13.854 "data_size": 0 00:26:13.854 } 00:26:13.854 ] 00:26:13.854 }' 00:26:13.854 06:57:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:13.854 06:57:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:14.421 06:57:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:26:14.680 [2024-08-14 06:57:41.806796] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:14.680 [2024-08-14 06:57:41.807017] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:26:14.680 [2024-08-14 06:57:41.807040] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:14.680 [2024-08-14 06:57:41.807357] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:26:14.680 [2024-08-14 06:57:41.807539] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:26:14.681 [2024-08-14 06:57:41.807551] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:26:14.681 BaseBdev2 00:26:14.681 [2024-08-14 06:57:41.807712] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:14.681 06:57:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:14.681 06:57:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:26:14.681 06:57:41 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:14.681 06:57:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:26:14.681 06:57:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:14.681 06:57:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:14.681 06:57:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:14.958 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:15.254 [ 00:26:15.254 { 00:26:15.254 "name": "BaseBdev2", 00:26:15.254 "aliases": [ 00:26:15.254 "392809f1-5832-4849-806f-ce9f40a8b3cd" 00:26:15.254 ], 00:26:15.254 "product_name": "Malloc disk", 00:26:15.254 "block_size": 4096, 00:26:15.254 "num_blocks": 8192, 00:26:15.254 "uuid": "392809f1-5832-4849-806f-ce9f40a8b3cd", 00:26:15.254 "assigned_rate_limits": { 00:26:15.254 "rw_ios_per_sec": 0, 00:26:15.254 "rw_mbytes_per_sec": 0, 00:26:15.254 "r_mbytes_per_sec": 0, 00:26:15.254 "w_mbytes_per_sec": 0 00:26:15.254 }, 00:26:15.254 "claimed": true, 00:26:15.254 "claim_type": "exclusive_write", 00:26:15.254 "zoned": false, 00:26:15.254 "supported_io_types": { 00:26:15.254 "read": true, 00:26:15.254 "write": true, 00:26:15.254 "unmap": true, 00:26:15.254 "flush": true, 00:26:15.254 "reset": true, 00:26:15.254 "nvme_admin": false, 00:26:15.254 "nvme_io": false, 00:26:15.254 "nvme_io_md": false, 00:26:15.254 "write_zeroes": true, 00:26:15.254 "zcopy": true, 00:26:15.254 "get_zone_info": false, 00:26:15.254 "zone_management": false, 00:26:15.254 "zone_append": false, 00:26:15.254 "compare": false, 00:26:15.254 "compare_and_write": false, 00:26:15.254 "abort": true, 00:26:15.254 "seek_hole": false, 00:26:15.254 "seek_data": false, 00:26:15.254 "copy": true, 00:26:15.254 "nvme_iov_md": false 00:26:15.254 }, 00:26:15.254 "memory_domains": [ 00:26:15.254 { 00:26:15.254 "dma_device_id": "system", 00:26:15.254 "dma_device_type": 1 00:26:15.254 }, 00:26:15.254 { 00:26:15.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.254 "dma_device_type": 2 00:26:15.254 } 00:26:15.254 ], 00:26:15.254 "driver_specific": {} 00:26:15.254 } 00:26:15.254 ] 00:26:15.254 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:26:15.254 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:15.254 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:15.254 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:26:15.254 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:15.254 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:15.254 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:15.254 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:15.254 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:15.254 
06:57:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:15.254 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:15.254 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:15.254 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:15.254 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.254 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.513 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:15.513 "name": "Existed_Raid", 00:26:15.513 "uuid": "1f74b006-08c1-49be-ba63-e2ae7da0d5a9", 00:26:15.513 "strip_size_kb": 0, 00:26:15.513 "state": "online", 00:26:15.513 "raid_level": "raid1", 00:26:15.513 "superblock": true, 00:26:15.513 "num_base_bdevs": 2, 00:26:15.513 "num_base_bdevs_discovered": 2, 00:26:15.513 "num_base_bdevs_operational": 2, 00:26:15.513 "base_bdevs_list": [ 00:26:15.513 { 00:26:15.513 "name": "BaseBdev1", 00:26:15.513 "uuid": "350b161f-f34f-45e0-a326-97cef6e37ad0", 00:26:15.513 "is_configured": true, 00:26:15.513 "data_offset": 256, 00:26:15.513 "data_size": 7936 00:26:15.513 }, 00:26:15.513 { 00:26:15.513 "name": "BaseBdev2", 00:26:15.513 "uuid": "392809f1-5832-4849-806f-ce9f40a8b3cd", 00:26:15.513 "is_configured": true, 00:26:15.513 "data_offset": 256, 00:26:15.513 "data_size": 7936 00:26:15.513 } 00:26:15.513 ] 00:26:15.513 }' 00:26:15.513 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:15.513 06:57:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:16.082 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:16.082 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:16.082 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:16.082 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:16.082 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:16.082 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:26:16.082 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:16.082 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:16.082 [2024-08-14 06:57:43.268697] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:16.082 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:16.082 "name": "Existed_Raid", 00:26:16.082 "aliases": [ 00:26:16.082 "1f74b006-08c1-49be-ba63-e2ae7da0d5a9" 00:26:16.082 ], 00:26:16.082 "product_name": "Raid Volume", 00:26:16.082 "block_size": 4096, 00:26:16.082 "num_blocks": 7936, 00:26:16.082 "uuid": "1f74b006-08c1-49be-ba63-e2ae7da0d5a9", 00:26:16.082 "assigned_rate_limits": { 
00:26:16.082 "rw_ios_per_sec": 0, 00:26:16.082 "rw_mbytes_per_sec": 0, 00:26:16.082 "r_mbytes_per_sec": 0, 00:26:16.082 "w_mbytes_per_sec": 0 00:26:16.082 }, 00:26:16.082 "claimed": false, 00:26:16.082 "zoned": false, 00:26:16.082 "supported_io_types": { 00:26:16.082 "read": true, 00:26:16.082 "write": true, 00:26:16.082 "unmap": false, 00:26:16.082 "flush": false, 00:26:16.082 "reset": true, 00:26:16.082 "nvme_admin": false, 00:26:16.082 "nvme_io": false, 00:26:16.082 "nvme_io_md": false, 00:26:16.082 "write_zeroes": true, 00:26:16.082 "zcopy": false, 00:26:16.082 "get_zone_info": false, 00:26:16.082 "zone_management": false, 00:26:16.082 "zone_append": false, 00:26:16.082 "compare": false, 00:26:16.082 "compare_and_write": false, 00:26:16.082 "abort": false, 00:26:16.082 "seek_hole": false, 00:26:16.082 "seek_data": false, 00:26:16.082 "copy": false, 00:26:16.082 "nvme_iov_md": false 00:26:16.082 }, 00:26:16.082 "memory_domains": [ 00:26:16.082 { 00:26:16.082 "dma_device_id": "system", 00:26:16.082 "dma_device_type": 1 00:26:16.082 }, 00:26:16.082 { 00:26:16.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.082 "dma_device_type": 2 00:26:16.082 }, 00:26:16.082 { 00:26:16.082 "dma_device_id": "system", 00:26:16.082 "dma_device_type": 1 00:26:16.082 }, 00:26:16.082 { 00:26:16.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.082 "dma_device_type": 2 00:26:16.082 } 00:26:16.082 ], 00:26:16.082 "driver_specific": { 00:26:16.082 "raid": { 00:26:16.082 "uuid": "1f74b006-08c1-49be-ba63-e2ae7da0d5a9", 00:26:16.082 "strip_size_kb": 0, 00:26:16.082 "state": "online", 00:26:16.082 "raid_level": "raid1", 00:26:16.082 "superblock": true, 00:26:16.082 "num_base_bdevs": 2, 00:26:16.082 "num_base_bdevs_discovered": 2, 00:26:16.082 "num_base_bdevs_operational": 2, 00:26:16.082 "base_bdevs_list": [ 00:26:16.082 { 00:26:16.082 "name": "BaseBdev1", 00:26:16.082 "uuid": "350b161f-f34f-45e0-a326-97cef6e37ad0", 00:26:16.082 "is_configured": true, 00:26:16.082 "data_offset": 256, 00:26:16.082 "data_size": 7936 00:26:16.082 }, 00:26:16.082 { 00:26:16.082 "name": "BaseBdev2", 00:26:16.082 "uuid": "392809f1-5832-4849-806f-ce9f40a8b3cd", 00:26:16.082 "is_configured": true, 00:26:16.082 "data_offset": 256, 00:26:16.082 "data_size": 7936 00:26:16.082 } 00:26:16.082 ] 00:26:16.082 } 00:26:16.082 } 00:26:16.082 }' 00:26:16.082 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:16.341 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:16.341 BaseBdev2' 00:26:16.341 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:16.341 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:16.341 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:16.341 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:16.341 "name": "BaseBdev1", 00:26:16.341 "aliases": [ 00:26:16.341 "350b161f-f34f-45e0-a326-97cef6e37ad0" 00:26:16.341 ], 00:26:16.341 "product_name": "Malloc disk", 00:26:16.341 "block_size": 4096, 00:26:16.341 "num_blocks": 8192, 00:26:16.341 "uuid": "350b161f-f34f-45e0-a326-97cef6e37ad0", 00:26:16.341 "assigned_rate_limits": { 00:26:16.341 "rw_ios_per_sec": 
0, 00:26:16.341 "rw_mbytes_per_sec": 0, 00:26:16.341 "r_mbytes_per_sec": 0, 00:26:16.341 "w_mbytes_per_sec": 0 00:26:16.341 }, 00:26:16.341 "claimed": true, 00:26:16.341 "claim_type": "exclusive_write", 00:26:16.341 "zoned": false, 00:26:16.341 "supported_io_types": { 00:26:16.341 "read": true, 00:26:16.341 "write": true, 00:26:16.341 "unmap": true, 00:26:16.341 "flush": true, 00:26:16.341 "reset": true, 00:26:16.341 "nvme_admin": false, 00:26:16.341 "nvme_io": false, 00:26:16.341 "nvme_io_md": false, 00:26:16.341 "write_zeroes": true, 00:26:16.341 "zcopy": true, 00:26:16.341 "get_zone_info": false, 00:26:16.341 "zone_management": false, 00:26:16.341 "zone_append": false, 00:26:16.341 "compare": false, 00:26:16.341 "compare_and_write": false, 00:26:16.341 "abort": true, 00:26:16.341 "seek_hole": false, 00:26:16.341 "seek_data": false, 00:26:16.341 "copy": true, 00:26:16.341 "nvme_iov_md": false 00:26:16.341 }, 00:26:16.341 "memory_domains": [ 00:26:16.341 { 00:26:16.341 "dma_device_id": "system", 00:26:16.341 "dma_device_type": 1 00:26:16.341 }, 00:26:16.341 { 00:26:16.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.341 "dma_device_type": 2 00:26:16.341 } 00:26:16.341 ], 00:26:16.341 "driver_specific": {} 00:26:16.341 }' 00:26:16.341 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:16.341 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:16.600 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:16.600 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:16.600 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:16.600 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:16.600 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:16.600 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:16.600 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:16.600 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:16.600 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:16.860 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:16.860 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:16.860 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:16.860 06:57:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:16.860 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:16.860 "name": "BaseBdev2", 00:26:16.860 "aliases": [ 00:26:16.860 "392809f1-5832-4849-806f-ce9f40a8b3cd" 00:26:16.860 ], 00:26:16.860 "product_name": "Malloc disk", 00:26:16.860 "block_size": 4096, 00:26:16.860 "num_blocks": 8192, 00:26:16.860 "uuid": "392809f1-5832-4849-806f-ce9f40a8b3cd", 00:26:16.860 "assigned_rate_limits": { 00:26:16.860 "rw_ios_per_sec": 0, 00:26:16.860 "rw_mbytes_per_sec": 0, 00:26:16.860 "r_mbytes_per_sec": 0, 00:26:16.860 
"w_mbytes_per_sec": 0 00:26:16.860 }, 00:26:16.860 "claimed": true, 00:26:16.860 "claim_type": "exclusive_write", 00:26:16.860 "zoned": false, 00:26:16.860 "supported_io_types": { 00:26:16.860 "read": true, 00:26:16.860 "write": true, 00:26:16.860 "unmap": true, 00:26:16.860 "flush": true, 00:26:16.860 "reset": true, 00:26:16.860 "nvme_admin": false, 00:26:16.860 "nvme_io": false, 00:26:16.860 "nvme_io_md": false, 00:26:16.860 "write_zeroes": true, 00:26:16.860 "zcopy": true, 00:26:16.860 "get_zone_info": false, 00:26:16.860 "zone_management": false, 00:26:16.860 "zone_append": false, 00:26:16.860 "compare": false, 00:26:16.860 "compare_and_write": false, 00:26:16.860 "abort": true, 00:26:16.860 "seek_hole": false, 00:26:16.860 "seek_data": false, 00:26:16.860 "copy": true, 00:26:16.860 "nvme_iov_md": false 00:26:16.860 }, 00:26:16.860 "memory_domains": [ 00:26:16.860 { 00:26:16.860 "dma_device_id": "system", 00:26:16.860 "dma_device_type": 1 00:26:16.860 }, 00:26:16.860 { 00:26:16.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.860 "dma_device_type": 2 00:26:16.860 } 00:26:16.860 ], 00:26:16.860 "driver_specific": {} 00:26:16.860 }' 00:26:16.860 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.119 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.119 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:17.119 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.119 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.119 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:17.119 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.119 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.119 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:17.119 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.378 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.378 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:17.378 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:17.378 [2024-08-14 06:57:44.622300] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=Existed_Raid 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:17.636 "name": "Existed_Raid", 00:26:17.636 "uuid": "1f74b006-08c1-49be-ba63-e2ae7da0d5a9", 00:26:17.636 "strip_size_kb": 0, 00:26:17.636 "state": "online", 00:26:17.636 "raid_level": "raid1", 00:26:17.636 "superblock": true, 00:26:17.636 "num_base_bdevs": 2, 00:26:17.636 "num_base_bdevs_discovered": 1, 00:26:17.636 "num_base_bdevs_operational": 1, 00:26:17.636 "base_bdevs_list": [ 00:26:17.636 { 00:26:17.636 "name": null, 00:26:17.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.636 "is_configured": false, 00:26:17.636 "data_offset": 256, 00:26:17.636 "data_size": 7936 00:26:17.636 }, 00:26:17.636 { 00:26:17.636 "name": "BaseBdev2", 00:26:17.636 "uuid": "392809f1-5832-4849-806f-ce9f40a8b3cd", 00:26:17.636 "is_configured": true, 00:26:17.636 "data_offset": 256, 00:26:17.636 "data_size": 7936 00:26:17.636 } 00:26:17.636 ] 00:26:17.636 }' 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:17.636 06:57:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:18.204 06:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:18.205 06:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:18.205 06:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.205 06:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:18.464 06:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:18.464 06:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:18.464 06:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:18.722 [2024-08-14 06:57:45.839723] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev2 00:26:18.722 [2024-08-14 06:57:45.839942] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:18.722 [2024-08-14 06:57:45.852013] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:18.722 [2024-08-14 06:57:45.852072] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:18.722 [2024-08-14 06:57:45.852084] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:26:18.722 06:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:18.723 06:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:18.723 06:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:18.723 06:57:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.982 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:18.982 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:18.982 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:26:18.982 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 106230 00:26:18.982 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@946 -- # '[' -z 106230 ']' 00:26:18.982 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # kill -0 106230 00:26:18.982 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # uname 00:26:18.982 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:18.982 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106230 00:26:18.982 killing process with pid 106230 00:26:18.982 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:18.982 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:18.982 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106230' 00:26:18.982 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@965 -- # kill 106230 00:26:18.982 [2024-08-14 06:57:46.120010] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:18.982 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # wait 106230 00:26:18.982 [2024-08-14 06:57:46.121074] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:19.241 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:26:19.241 00:26:19.241 real 0m10.125s 00:26:19.241 user 0m18.258s 00:26:19.241 sys 0m1.551s 00:26:19.241 ************************************ 00:26:19.241 END TEST raid_state_function_test_sb_4k 00:26:19.241 ************************************ 00:26:19.241 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:19.241 06:57:46 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:26:19.241 06:57:46 bdev_raid -- bdev/bdev_raid.sh@977 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:26:19.241 06:57:46 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:26:19.241 06:57:46 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:19.241 06:57:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:19.241 ************************************ 00:26:19.241 START TEST raid_superblock_test_4k 00:26:19.241 ************************************ 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@414 -- # local strip_size 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@427 -- # raid_pid=106568 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@428 -- # waitforlisten 106568 /var/tmp/spdk-raid.sock 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@827 -- # '[' -z 106568 ']' 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:19.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
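The suite now launches a fresh bdev_svc RPC target for raid_superblock_test_4k and blocks until its UNIX socket is listening. A minimal stand-alone sketch of that step, using the bdev_svc invocation and socket path from the trace above; running it in the background and the polling loop are assumptions (the suite's own waitforlisten helper handles this), and rpc_get_methods is used here only as a cheap liveness probe:

# Start the stand-alone bdev service with bdev_raid debug logging and remember its pid.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
# Poll until the RPC socket answers; assumed stand-in for the suite's waitforlisten helper.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done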
00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:19.241 06:57:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:19.241 [2024-08-14 06:57:46.495179] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:26:19.502 [2024-08-14 06:57:46.495841] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106568 ] 00:26:19.502 [2024-08-14 06:57:46.644845] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.502 [2024-08-14 06:57:46.690847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.502 [2024-08-14 06:57:46.733477] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:19.502 [2024-08-14 06:57:46.733597] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:20.440 06:57:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:20.440 06:57:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # return 0 00:26:20.440 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:26:20.440 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:20.440 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:26:20.440 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:26:20.440 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:20.440 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:20.440 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:20.440 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:20.440 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:26:20.440 malloc1 00:26:20.440 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:20.699 [2024-08-14 06:57:47.750408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:20.699 [2024-08-14 06:57:47.751055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:20.699 [2024-08-14 06:57:47.751213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:26:20.699 [2024-08-14 06:57:47.751373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:20.699 [2024-08-14 06:57:47.754339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:20.699 [2024-08-14 06:57:47.754531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:20.699 pt1 00:26:20.699 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:20.699 06:57:47 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:20.699 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:26:20.699 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:26:20.699 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:20.699 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:20.699 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:20.699 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:20.699 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:26:20.957 malloc2 00:26:20.957 06:57:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:20.957 [2024-08-14 06:57:48.179500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:20.957 [2024-08-14 06:57:48.179807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:20.957 [2024-08-14 06:57:48.179921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:20.957 [2024-08-14 06:57:48.180010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:20.958 [2024-08-14 06:57:48.182204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:20.958 [2024-08-14 06:57:48.182349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:20.958 pt2 00:26:20.958 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:20.958 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:20.958 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:26:21.217 [2024-08-14 06:57:48.375262] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:21.217 [2024-08-14 06:57:48.377373] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:21.217 [2024-08-14 06:57:48.377591] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:26:21.217 [2024-08-14 06:57:48.377648] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:21.217 [2024-08-14 06:57:48.378014] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:26:21.217 [2024-08-14 06:57:48.378254] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:26:21.217 [2024-08-14 06:57:48.378307] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:26:21.217 [2024-08-14 06:57:48.378527] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:21.217 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 2 00:26:21.217 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:21.217 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:21.217 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:21.217 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:21.217 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:21.217 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:21.217 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:21.217 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:21.217 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:21.217 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.217 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.477 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:21.477 "name": "raid_bdev1", 00:26:21.477 "uuid": "34d66c5d-e459-4cfe-bd64-c6989b363af7", 00:26:21.477 "strip_size_kb": 0, 00:26:21.477 "state": "online", 00:26:21.477 "raid_level": "raid1", 00:26:21.477 "superblock": true, 00:26:21.477 "num_base_bdevs": 2, 00:26:21.477 "num_base_bdevs_discovered": 2, 00:26:21.477 "num_base_bdevs_operational": 2, 00:26:21.477 "base_bdevs_list": [ 00:26:21.477 { 00:26:21.477 "name": "pt1", 00:26:21.477 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:21.477 "is_configured": true, 00:26:21.477 "data_offset": 256, 00:26:21.477 "data_size": 7936 00:26:21.477 }, 00:26:21.477 { 00:26:21.477 "name": "pt2", 00:26:21.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:21.477 "is_configured": true, 00:26:21.477 "data_offset": 256, 00:26:21.477 "data_size": 7936 00:26:21.477 } 00:26:21.477 ] 00:26:21.477 }' 00:26:21.477 06:57:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:21.477 06:57:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:22.044 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:26:22.044 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:22.044 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:22.044 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:22.044 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:22.044 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:26:22.044 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:22.044 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:22.303 [2024-08-14 06:57:49.449910] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
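At this point raid_bdev1 is online with both base bdevs configured. A condensed sketch of the RPC sequence the test issued to get here; every command below appears verbatim in the trace, only the $rpc shorthand is added for brevity:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Two malloc bdevs (32 MiB, 4096-byte blocks) back the array.
$rpc bdev_malloc_create 32 4096 -b malloc1
$rpc bdev_malloc_create 32 4096 -b malloc2
# Wrap each one in a passthru bdev with a fixed UUID so the test can address pt1/pt2.
$rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# Assemble the passthru bdevs into a raid1 volume with an on-disk superblock (-s).
$rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
# Confirm the array reports state "online" with 2 of 2 base bdevs discovered.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'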
00:26:22.303 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:22.303 "name": "raid_bdev1", 00:26:22.303 "aliases": [ 00:26:22.303 "34d66c5d-e459-4cfe-bd64-c6989b363af7" 00:26:22.303 ], 00:26:22.303 "product_name": "Raid Volume", 00:26:22.303 "block_size": 4096, 00:26:22.303 "num_blocks": 7936, 00:26:22.303 "uuid": "34d66c5d-e459-4cfe-bd64-c6989b363af7", 00:26:22.303 "assigned_rate_limits": { 00:26:22.303 "rw_ios_per_sec": 0, 00:26:22.303 "rw_mbytes_per_sec": 0, 00:26:22.303 "r_mbytes_per_sec": 0, 00:26:22.303 "w_mbytes_per_sec": 0 00:26:22.303 }, 00:26:22.303 "claimed": false, 00:26:22.303 "zoned": false, 00:26:22.303 "supported_io_types": { 00:26:22.303 "read": true, 00:26:22.303 "write": true, 00:26:22.303 "unmap": false, 00:26:22.303 "flush": false, 00:26:22.303 "reset": true, 00:26:22.303 "nvme_admin": false, 00:26:22.303 "nvme_io": false, 00:26:22.303 "nvme_io_md": false, 00:26:22.303 "write_zeroes": true, 00:26:22.303 "zcopy": false, 00:26:22.303 "get_zone_info": false, 00:26:22.303 "zone_management": false, 00:26:22.303 "zone_append": false, 00:26:22.303 "compare": false, 00:26:22.303 "compare_and_write": false, 00:26:22.303 "abort": false, 00:26:22.303 "seek_hole": false, 00:26:22.303 "seek_data": false, 00:26:22.303 "copy": false, 00:26:22.303 "nvme_iov_md": false 00:26:22.303 }, 00:26:22.303 "memory_domains": [ 00:26:22.303 { 00:26:22.303 "dma_device_id": "system", 00:26:22.303 "dma_device_type": 1 00:26:22.303 }, 00:26:22.303 { 00:26:22.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.303 "dma_device_type": 2 00:26:22.303 }, 00:26:22.303 { 00:26:22.303 "dma_device_id": "system", 00:26:22.303 "dma_device_type": 1 00:26:22.303 }, 00:26:22.303 { 00:26:22.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.303 "dma_device_type": 2 00:26:22.303 } 00:26:22.303 ], 00:26:22.303 "driver_specific": { 00:26:22.303 "raid": { 00:26:22.303 "uuid": "34d66c5d-e459-4cfe-bd64-c6989b363af7", 00:26:22.303 "strip_size_kb": 0, 00:26:22.303 "state": "online", 00:26:22.303 "raid_level": "raid1", 00:26:22.303 "superblock": true, 00:26:22.303 "num_base_bdevs": 2, 00:26:22.303 "num_base_bdevs_discovered": 2, 00:26:22.303 "num_base_bdevs_operational": 2, 00:26:22.303 "base_bdevs_list": [ 00:26:22.303 { 00:26:22.303 "name": "pt1", 00:26:22.303 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:22.303 "is_configured": true, 00:26:22.303 "data_offset": 256, 00:26:22.303 "data_size": 7936 00:26:22.303 }, 00:26:22.303 { 00:26:22.303 "name": "pt2", 00:26:22.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:22.303 "is_configured": true, 00:26:22.303 "data_offset": 256, 00:26:22.303 "data_size": 7936 00:26:22.303 } 00:26:22.303 ] 00:26:22.303 } 00:26:22.303 } 00:26:22.303 }' 00:26:22.303 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:22.303 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:22.303 pt2' 00:26:22.303 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:22.303 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:22.303 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:22.562 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:26:22.562 "name": "pt1", 00:26:22.562 "aliases": [ 00:26:22.562 "00000000-0000-0000-0000-000000000001" 00:26:22.562 ], 00:26:22.562 "product_name": "passthru", 00:26:22.562 "block_size": 4096, 00:26:22.562 "num_blocks": 8192, 00:26:22.562 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:22.562 "assigned_rate_limits": { 00:26:22.562 "rw_ios_per_sec": 0, 00:26:22.562 "rw_mbytes_per_sec": 0, 00:26:22.562 "r_mbytes_per_sec": 0, 00:26:22.562 "w_mbytes_per_sec": 0 00:26:22.562 }, 00:26:22.562 "claimed": true, 00:26:22.562 "claim_type": "exclusive_write", 00:26:22.562 "zoned": false, 00:26:22.562 "supported_io_types": { 00:26:22.562 "read": true, 00:26:22.562 "write": true, 00:26:22.562 "unmap": true, 00:26:22.562 "flush": true, 00:26:22.562 "reset": true, 00:26:22.562 "nvme_admin": false, 00:26:22.562 "nvme_io": false, 00:26:22.562 "nvme_io_md": false, 00:26:22.562 "write_zeroes": true, 00:26:22.562 "zcopy": true, 00:26:22.562 "get_zone_info": false, 00:26:22.562 "zone_management": false, 00:26:22.562 "zone_append": false, 00:26:22.562 "compare": false, 00:26:22.562 "compare_and_write": false, 00:26:22.562 "abort": true, 00:26:22.562 "seek_hole": false, 00:26:22.562 "seek_data": false, 00:26:22.562 "copy": true, 00:26:22.562 "nvme_iov_md": false 00:26:22.562 }, 00:26:22.562 "memory_domains": [ 00:26:22.562 { 00:26:22.562 "dma_device_id": "system", 00:26:22.562 "dma_device_type": 1 00:26:22.562 }, 00:26:22.562 { 00:26:22.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.562 "dma_device_type": 2 00:26:22.562 } 00:26:22.562 ], 00:26:22.562 "driver_specific": { 00:26:22.562 "passthru": { 00:26:22.562 "name": "pt1", 00:26:22.562 "base_bdev_name": "malloc1" 00:26:22.562 } 00:26:22.562 } 00:26:22.562 }' 00:26:22.562 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:22.562 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:22.821 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:22.821 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:22.821 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:22.821 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:22.821 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:22.821 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:22.821 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:22.821 06:57:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:22.821 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:23.080 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:23.080 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:23.080 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:23.080 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:23.080 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:23.080 "name": "pt2", 00:26:23.080 "aliases": [ 00:26:23.080 
"00000000-0000-0000-0000-000000000002" 00:26:23.080 ], 00:26:23.080 "product_name": "passthru", 00:26:23.080 "block_size": 4096, 00:26:23.080 "num_blocks": 8192, 00:26:23.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:23.080 "assigned_rate_limits": { 00:26:23.080 "rw_ios_per_sec": 0, 00:26:23.080 "rw_mbytes_per_sec": 0, 00:26:23.080 "r_mbytes_per_sec": 0, 00:26:23.080 "w_mbytes_per_sec": 0 00:26:23.080 }, 00:26:23.080 "claimed": true, 00:26:23.080 "claim_type": "exclusive_write", 00:26:23.080 "zoned": false, 00:26:23.080 "supported_io_types": { 00:26:23.080 "read": true, 00:26:23.080 "write": true, 00:26:23.080 "unmap": true, 00:26:23.080 "flush": true, 00:26:23.080 "reset": true, 00:26:23.080 "nvme_admin": false, 00:26:23.080 "nvme_io": false, 00:26:23.080 "nvme_io_md": false, 00:26:23.080 "write_zeroes": true, 00:26:23.080 "zcopy": true, 00:26:23.080 "get_zone_info": false, 00:26:23.080 "zone_management": false, 00:26:23.080 "zone_append": false, 00:26:23.081 "compare": false, 00:26:23.081 "compare_and_write": false, 00:26:23.081 "abort": true, 00:26:23.081 "seek_hole": false, 00:26:23.081 "seek_data": false, 00:26:23.081 "copy": true, 00:26:23.081 "nvme_iov_md": false 00:26:23.081 }, 00:26:23.081 "memory_domains": [ 00:26:23.081 { 00:26:23.081 "dma_device_id": "system", 00:26:23.081 "dma_device_type": 1 00:26:23.081 }, 00:26:23.081 { 00:26:23.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.081 "dma_device_type": 2 00:26:23.081 } 00:26:23.081 ], 00:26:23.081 "driver_specific": { 00:26:23.081 "passthru": { 00:26:23.081 "name": "pt2", 00:26:23.081 "base_bdev_name": "malloc2" 00:26:23.081 } 00:26:23.081 } 00:26:23.081 }' 00:26:23.081 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:23.340 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:23.340 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:23.340 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:23.340 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:23.340 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:23.340 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:23.340 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:23.599 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:23.599 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:23.599 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:23.599 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:23.599 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:23.599 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:26:23.858 [2024-08-14 06:57:50.879404] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:23.858 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=34d66c5d-e459-4cfe-bd64-c6989b363af7 00:26:23.858 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' -z 
34d66c5d-e459-4cfe-bd64-c6989b363af7 ']' 00:26:23.858 06:57:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:23.858 [2024-08-14 06:57:51.102798] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:23.858 [2024-08-14 06:57:51.102854] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:23.858 [2024-08-14 06:57:51.102966] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:23.858 [2024-08-14 06:57:51.103034] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:23.858 [2024-08-14 06:57:51.103049] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:26:24.117 06:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.117 06:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:26:24.117 06:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:26:24.117 06:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:26:24.117 06:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:24.117 06:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:24.376 06:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:24.376 06:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:24.635 06:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:24.635 06:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:24.894 06:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:26:24.894 06:57:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:24.894 06:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@646 -- # local es=0 00:26:24.894 06:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:24.894 06:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:24.894 06:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:26:24.894 06:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:24.894 06:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:26:24.894 06:57:51 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:24.894 06:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:26:24.894 06:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:24.894 06:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:24.894 06:57:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:25.153 [2024-08-14 06:57:52.149110] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:25.153 [2024-08-14 06:57:52.150984] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:25.153 [2024-08-14 06:57:52.151057] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:25.153 [2024-08-14 06:57:52.151642] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:25.153 [2024-08-14 06:57:52.151769] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:25.153 [2024-08-14 06:57:52.151814] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:26:25.153 request: 00:26:25.153 { 00:26:25.153 "name": "raid_bdev1", 00:26:25.153 "raid_level": "raid1", 00:26:25.153 "base_bdevs": [ 00:26:25.153 "malloc1", 00:26:25.153 "malloc2" 00:26:25.153 ], 00:26:25.153 "superblock": false, 00:26:25.153 "method": "bdev_raid_create", 00:26:25.153 "req_id": 1 00:26:25.153 } 00:26:25.153 Got JSON-RPC error response 00:26:25.153 response: 00:26:25.153 { 00:26:25.153 "code": -17, 00:26:25.153 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:25.153 } 00:26:25.153 06:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@649 -- # es=1 00:26:25.153 06:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:26:25.153 06:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:26:25.153 06:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:26:25.153 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.153 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:26:25.153 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:26:25.153 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:26:25.153 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:25.412 [2024-08-14 06:57:52.556339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:25.412 [2024-08-14 06:57:52.556680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:25.412 [2024-08-14 06:57:52.556709] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:25.412 [2024-08-14 06:57:52.556722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:25.412 [2024-08-14 06:57:52.559020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:25.412 [2024-08-14 06:57:52.559109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:25.412 [2024-08-14 06:57:52.559211] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:25.412 [2024-08-14 06:57:52.559268] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:25.412 pt1 00:26:25.412 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:26:25.412 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:25.412 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:25.412 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:25.412 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:25.412 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:25.412 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:25.412 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:25.412 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:25.412 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:25.412 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.412 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.671 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:25.671 "name": "raid_bdev1", 00:26:25.671 "uuid": "34d66c5d-e459-4cfe-bd64-c6989b363af7", 00:26:25.671 "strip_size_kb": 0, 00:26:25.671 "state": "configuring", 00:26:25.671 "raid_level": "raid1", 00:26:25.671 "superblock": true, 00:26:25.671 "num_base_bdevs": 2, 00:26:25.671 "num_base_bdevs_discovered": 1, 00:26:25.671 "num_base_bdevs_operational": 2, 00:26:25.671 "base_bdevs_list": [ 00:26:25.671 { 00:26:25.671 "name": "pt1", 00:26:25.671 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:25.671 "is_configured": true, 00:26:25.671 "data_offset": 256, 00:26:25.671 "data_size": 7936 00:26:25.671 }, 00:26:25.671 { 00:26:25.671 "name": null, 00:26:25.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:25.671 "is_configured": false, 00:26:25.671 "data_offset": 256, 00:26:25.671 "data_size": 7936 00:26:25.671 } 00:26:25.671 ] 00:26:25.671 }' 00:26:25.671 06:57:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:25.671 06:57:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:26.240 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:26:26.240 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:26:26.240 
06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:26.240 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:26.500 [2024-08-14 06:57:53.506758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:26.500 [2024-08-14 06:57:53.506920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:26.500 [2024-08-14 06:57:53.506958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:26.500 [2024-08-14 06:57:53.506988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:26.500 [2024-08-14 06:57:53.507419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:26.500 [2024-08-14 06:57:53.507485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:26.500 [2024-08-14 06:57:53.507591] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:26.500 [2024-08-14 06:57:53.507643] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:26.500 [2024-08-14 06:57:53.507814] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:26:26.500 [2024-08-14 06:57:53.507853] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:26.500 [2024-08-14 06:57:53.508098] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:26:26.500 [2024-08-14 06:57:53.508255] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:26:26.500 [2024-08-14 06:57:53.508293] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:26:26.500 [2024-08-14 06:57:53.508428] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:26.500 pt2 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:26.500 "name": "raid_bdev1", 00:26:26.500 "uuid": "34d66c5d-e459-4cfe-bd64-c6989b363af7", 00:26:26.500 "strip_size_kb": 0, 00:26:26.500 "state": "online", 00:26:26.500 "raid_level": "raid1", 00:26:26.500 "superblock": true, 00:26:26.500 "num_base_bdevs": 2, 00:26:26.500 "num_base_bdevs_discovered": 2, 00:26:26.500 "num_base_bdevs_operational": 2, 00:26:26.500 "base_bdevs_list": [ 00:26:26.500 { 00:26:26.500 "name": "pt1", 00:26:26.500 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:26.500 "is_configured": true, 00:26:26.500 "data_offset": 256, 00:26:26.500 "data_size": 7936 00:26:26.500 }, 00:26:26.500 { 00:26:26.500 "name": "pt2", 00:26:26.500 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:26.500 "is_configured": true, 00:26:26.500 "data_offset": 256, 00:26:26.500 "data_size": 7936 00:26:26.500 } 00:26:26.500 ] 00:26:26.500 }' 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:26.500 06:57:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:27.068 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:26:27.068 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:27.068 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:27.068 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:27.068 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:27.068 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:26:27.068 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:27.068 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:27.327 [2024-08-14 06:57:54.501496] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:27.327 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:27.327 "name": "raid_bdev1", 00:26:27.327 "aliases": [ 00:26:27.327 "34d66c5d-e459-4cfe-bd64-c6989b363af7" 00:26:27.327 ], 00:26:27.327 "product_name": "Raid Volume", 00:26:27.327 "block_size": 4096, 00:26:27.327 "num_blocks": 7936, 00:26:27.327 "uuid": "34d66c5d-e459-4cfe-bd64-c6989b363af7", 00:26:27.327 "assigned_rate_limits": { 00:26:27.327 "rw_ios_per_sec": 0, 00:26:27.327 "rw_mbytes_per_sec": 0, 00:26:27.327 "r_mbytes_per_sec": 0, 00:26:27.327 "w_mbytes_per_sec": 0 00:26:27.327 }, 00:26:27.327 "claimed": false, 00:26:27.327 "zoned": false, 00:26:27.327 "supported_io_types": { 00:26:27.327 "read": true, 00:26:27.327 "write": true, 00:26:27.327 "unmap": false, 00:26:27.327 "flush": false, 00:26:27.327 "reset": true, 00:26:27.327 "nvme_admin": false, 00:26:27.327 "nvme_io": false, 00:26:27.327 "nvme_io_md": false, 00:26:27.327 "write_zeroes": true, 00:26:27.327 "zcopy": false, 00:26:27.327 "get_zone_info": false, 00:26:27.327 "zone_management": false, 00:26:27.327 
"zone_append": false, 00:26:27.327 "compare": false, 00:26:27.327 "compare_and_write": false, 00:26:27.327 "abort": false, 00:26:27.327 "seek_hole": false, 00:26:27.327 "seek_data": false, 00:26:27.327 "copy": false, 00:26:27.327 "nvme_iov_md": false 00:26:27.327 }, 00:26:27.327 "memory_domains": [ 00:26:27.327 { 00:26:27.327 "dma_device_id": "system", 00:26:27.327 "dma_device_type": 1 00:26:27.327 }, 00:26:27.327 { 00:26:27.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:27.327 "dma_device_type": 2 00:26:27.327 }, 00:26:27.327 { 00:26:27.327 "dma_device_id": "system", 00:26:27.327 "dma_device_type": 1 00:26:27.327 }, 00:26:27.327 { 00:26:27.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:27.327 "dma_device_type": 2 00:26:27.327 } 00:26:27.327 ], 00:26:27.327 "driver_specific": { 00:26:27.327 "raid": { 00:26:27.327 "uuid": "34d66c5d-e459-4cfe-bd64-c6989b363af7", 00:26:27.327 "strip_size_kb": 0, 00:26:27.327 "state": "online", 00:26:27.327 "raid_level": "raid1", 00:26:27.327 "superblock": true, 00:26:27.327 "num_base_bdevs": 2, 00:26:27.327 "num_base_bdevs_discovered": 2, 00:26:27.327 "num_base_bdevs_operational": 2, 00:26:27.327 "base_bdevs_list": [ 00:26:27.327 { 00:26:27.327 "name": "pt1", 00:26:27.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:27.327 "is_configured": true, 00:26:27.327 "data_offset": 256, 00:26:27.327 "data_size": 7936 00:26:27.327 }, 00:26:27.327 { 00:26:27.327 "name": "pt2", 00:26:27.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:27.327 "is_configured": true, 00:26:27.327 "data_offset": 256, 00:26:27.327 "data_size": 7936 00:26:27.327 } 00:26:27.327 ] 00:26:27.327 } 00:26:27.327 } 00:26:27.327 }' 00:26:27.327 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:27.327 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:27.327 pt2' 00:26:27.327 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:27.327 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:27.328 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:27.587 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:27.587 "name": "pt1", 00:26:27.587 "aliases": [ 00:26:27.587 "00000000-0000-0000-0000-000000000001" 00:26:27.587 ], 00:26:27.587 "product_name": "passthru", 00:26:27.587 "block_size": 4096, 00:26:27.587 "num_blocks": 8192, 00:26:27.587 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:27.587 "assigned_rate_limits": { 00:26:27.587 "rw_ios_per_sec": 0, 00:26:27.587 "rw_mbytes_per_sec": 0, 00:26:27.587 "r_mbytes_per_sec": 0, 00:26:27.587 "w_mbytes_per_sec": 0 00:26:27.587 }, 00:26:27.587 "claimed": true, 00:26:27.587 "claim_type": "exclusive_write", 00:26:27.587 "zoned": false, 00:26:27.587 "supported_io_types": { 00:26:27.587 "read": true, 00:26:27.587 "write": true, 00:26:27.587 "unmap": true, 00:26:27.587 "flush": true, 00:26:27.587 "reset": true, 00:26:27.587 "nvme_admin": false, 00:26:27.587 "nvme_io": false, 00:26:27.587 "nvme_io_md": false, 00:26:27.587 "write_zeroes": true, 00:26:27.587 "zcopy": true, 00:26:27.587 "get_zone_info": false, 00:26:27.587 "zone_management": false, 00:26:27.587 "zone_append": false, 00:26:27.587 "compare": false, 00:26:27.587 
"compare_and_write": false, 00:26:27.587 "abort": true, 00:26:27.587 "seek_hole": false, 00:26:27.587 "seek_data": false, 00:26:27.587 "copy": true, 00:26:27.587 "nvme_iov_md": false 00:26:27.587 }, 00:26:27.587 "memory_domains": [ 00:26:27.587 { 00:26:27.587 "dma_device_id": "system", 00:26:27.587 "dma_device_type": 1 00:26:27.587 }, 00:26:27.587 { 00:26:27.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:27.587 "dma_device_type": 2 00:26:27.587 } 00:26:27.587 ], 00:26:27.587 "driver_specific": { 00:26:27.587 "passthru": { 00:26:27.587 "name": "pt1", 00:26:27.587 "base_bdev_name": "malloc1" 00:26:27.587 } 00:26:27.587 } 00:26:27.587 }' 00:26:27.587 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:27.587 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:27.845 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:27.845 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:27.845 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:27.845 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:27.845 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:27.845 06:57:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:27.845 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:27.845 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:27.845 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:28.104 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:28.104 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:28.104 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:28.104 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:28.104 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:28.104 "name": "pt2", 00:26:28.104 "aliases": [ 00:26:28.104 "00000000-0000-0000-0000-000000000002" 00:26:28.104 ], 00:26:28.104 "product_name": "passthru", 00:26:28.104 "block_size": 4096, 00:26:28.104 "num_blocks": 8192, 00:26:28.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:28.104 "assigned_rate_limits": { 00:26:28.104 "rw_ios_per_sec": 0, 00:26:28.104 "rw_mbytes_per_sec": 0, 00:26:28.104 "r_mbytes_per_sec": 0, 00:26:28.104 "w_mbytes_per_sec": 0 00:26:28.104 }, 00:26:28.104 "claimed": true, 00:26:28.104 "claim_type": "exclusive_write", 00:26:28.104 "zoned": false, 00:26:28.104 "supported_io_types": { 00:26:28.104 "read": true, 00:26:28.104 "write": true, 00:26:28.104 "unmap": true, 00:26:28.104 "flush": true, 00:26:28.104 "reset": true, 00:26:28.104 "nvme_admin": false, 00:26:28.104 "nvme_io": false, 00:26:28.104 "nvme_io_md": false, 00:26:28.104 "write_zeroes": true, 00:26:28.104 "zcopy": true, 00:26:28.104 "get_zone_info": false, 00:26:28.104 "zone_management": false, 00:26:28.104 "zone_append": false, 00:26:28.104 "compare": false, 00:26:28.104 "compare_and_write": false, 00:26:28.104 "abort": true, 00:26:28.104 "seek_hole": false, 00:26:28.104 
"seek_data": false, 00:26:28.104 "copy": true, 00:26:28.104 "nvme_iov_md": false 00:26:28.104 }, 00:26:28.104 "memory_domains": [ 00:26:28.104 { 00:26:28.104 "dma_device_id": "system", 00:26:28.104 "dma_device_type": 1 00:26:28.104 }, 00:26:28.104 { 00:26:28.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:28.104 "dma_device_type": 2 00:26:28.104 } 00:26:28.104 ], 00:26:28.104 "driver_specific": { 00:26:28.104 "passthru": { 00:26:28.104 "name": "pt2", 00:26:28.104 "base_bdev_name": "malloc2" 00:26:28.104 } 00:26:28.104 } 00:26:28.104 }' 00:26:28.104 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:28.104 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:28.363 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:28.363 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:28.363 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:28.363 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:28.363 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:28.363 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:28.363 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:28.363 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:28.363 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:28.622 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:28.622 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:28.622 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:26:28.622 [2024-08-14 06:57:55.799273] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:28.622 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # '[' 34d66c5d-e459-4cfe-bd64-c6989b363af7 '!=' 34d66c5d-e459-4cfe-bd64-c6989b363af7 ']' 00:26:28.622 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:26:28.622 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:28.622 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:26:28.622 06:57:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:28.881 [2024-08-14 06:57:56.002697] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:28.881 06:57:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:28.881 06:57:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:28.881 06:57:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:28.881 06:57:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:28.881 06:57:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:28.881 06:57:56 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:28.881 06:57:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:28.881 06:57:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:28.881 06:57:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:28.881 06:57:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:28.881 06:57:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.881 06:57:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.140 06:57:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:29.140 "name": "raid_bdev1", 00:26:29.140 "uuid": "34d66c5d-e459-4cfe-bd64-c6989b363af7", 00:26:29.140 "strip_size_kb": 0, 00:26:29.140 "state": "online", 00:26:29.140 "raid_level": "raid1", 00:26:29.140 "superblock": true, 00:26:29.140 "num_base_bdevs": 2, 00:26:29.140 "num_base_bdevs_discovered": 1, 00:26:29.140 "num_base_bdevs_operational": 1, 00:26:29.140 "base_bdevs_list": [ 00:26:29.140 { 00:26:29.140 "name": null, 00:26:29.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.140 "is_configured": false, 00:26:29.140 "data_offset": 256, 00:26:29.140 "data_size": 7936 00:26:29.140 }, 00:26:29.140 { 00:26:29.140 "name": "pt2", 00:26:29.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:29.140 "is_configured": true, 00:26:29.140 "data_offset": 256, 00:26:29.140 "data_size": 7936 00:26:29.140 } 00:26:29.140 ] 00:26:29.140 }' 00:26:29.140 06:57:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:29.140 06:57:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:29.706 06:57:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:29.964 [2024-08-14 06:57:56.977008] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:29.964 [2024-08-14 06:57:56.977136] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:29.964 [2024-08-14 06:57:56.977263] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:29.964 [2024-08-14 06:57:56.977340] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:29.964 [2024-08-14 06:57:56.977394] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:26:29.964 06:57:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.964 06:57:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:26:29.964 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:26:29.964 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:26:29.964 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:29.964 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 
00:26:29.964 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:30.222 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:30.222 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:30.222 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:26:30.222 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:26:30.222 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@534 -- # i=1 00:26:30.222 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:30.481 [2024-08-14 06:57:57.579967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:30.481 [2024-08-14 06:57:57.580138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.481 [2024-08-14 06:57:57.580184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:26:30.481 [2024-08-14 06:57:57.580221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.481 [2024-08-14 06:57:57.582334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.481 [2024-08-14 06:57:57.582436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:30.481 [2024-08-14 06:57:57.582546] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:30.481 [2024-08-14 06:57:57.582614] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:30.481 [2024-08-14 06:57:57.582724] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:26:30.481 [2024-08-14 06:57:57.582765] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:30.481 [2024-08-14 06:57:57.583021] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:26:30.481 [2024-08-14 06:57:57.583187] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:26:30.481 [2024-08-14 06:57:57.583229] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:26:30.481 [2024-08-14 06:57:57.583374] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:30.481 pt2 00:26:30.481 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:30.481 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:30.481 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:30.481 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:30.481 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:30.481 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:30.481 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:30.481 06:57:57 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:30.481 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:30.481 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:30.481 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.481 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.740 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:30.740 "name": "raid_bdev1", 00:26:30.740 "uuid": "34d66c5d-e459-4cfe-bd64-c6989b363af7", 00:26:30.740 "strip_size_kb": 0, 00:26:30.740 "state": "online", 00:26:30.740 "raid_level": "raid1", 00:26:30.740 "superblock": true, 00:26:30.740 "num_base_bdevs": 2, 00:26:30.740 "num_base_bdevs_discovered": 1, 00:26:30.740 "num_base_bdevs_operational": 1, 00:26:30.740 "base_bdevs_list": [ 00:26:30.740 { 00:26:30.740 "name": null, 00:26:30.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.740 "is_configured": false, 00:26:30.740 "data_offset": 256, 00:26:30.740 "data_size": 7936 00:26:30.740 }, 00:26:30.740 { 00:26:30.740 "name": "pt2", 00:26:30.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:30.740 "is_configured": true, 00:26:30.740 "data_offset": 256, 00:26:30.740 "data_size": 7936 00:26:30.740 } 00:26:30.740 ] 00:26:30.740 }' 00:26:30.740 06:57:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:30.740 06:57:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:31.342 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:31.342 [2024-08-14 06:57:58.526333] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:31.342 [2024-08-14 06:57:58.526456] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:31.342 [2024-08-14 06:57:58.526557] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:31.342 [2024-08-14 06:57:58.526625] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:31.342 [2024-08-14 06:57:58.526676] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:26:31.342 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.342 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:26:31.602 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:26:31.602 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:26:31.602 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:26:31.602 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:31.861 [2024-08-14 06:57:58.921685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc1 00:26:31.861 [2024-08-14 06:57:58.921760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:31.861 [2024-08-14 06:57:58.921779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:26:31.861 [2024-08-14 06:57:58.921788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:31.861 [2024-08-14 06:57:58.924046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:31.861 [2024-08-14 06:57:58.924121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:31.861 [2024-08-14 06:57:58.924234] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:31.861 [2024-08-14 06:57:58.924312] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:31.861 [2024-08-14 06:57:58.924467] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:31.861 [2024-08-14 06:57:58.924521] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:31.861 [2024-08-14 06:57:58.924595] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:26:31.861 [2024-08-14 06:57:58.924666] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:31.861 [2024-08-14 06:57:58.924776] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:26:31.861 [2024-08-14 06:57:58.924814] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:31.861 [2024-08-14 06:57:58.925062] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:26:31.861 [2024-08-14 06:57:58.925233] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:26:31.861 [2024-08-14 06:57:58.925281] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:26:31.861 [2024-08-14 06:57:58.925417] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:31.861 pt1 00:26:31.861 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:26:31.861 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:31.861 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:31.861 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:31.861 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:31.861 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:31.861 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:31.861 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:31.861 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:31.861 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:31.861 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:31.861 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.861 06:57:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.121 06:57:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:32.121 "name": "raid_bdev1", 00:26:32.121 "uuid": "34d66c5d-e459-4cfe-bd64-c6989b363af7", 00:26:32.121 "strip_size_kb": 0, 00:26:32.121 "state": "online", 00:26:32.121 "raid_level": "raid1", 00:26:32.121 "superblock": true, 00:26:32.121 "num_base_bdevs": 2, 00:26:32.121 "num_base_bdevs_discovered": 1, 00:26:32.121 "num_base_bdevs_operational": 1, 00:26:32.121 "base_bdevs_list": [ 00:26:32.121 { 00:26:32.121 "name": null, 00:26:32.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.121 "is_configured": false, 00:26:32.121 "data_offset": 256, 00:26:32.121 "data_size": 7936 00:26:32.121 }, 00:26:32.121 { 00:26:32.121 "name": "pt2", 00:26:32.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:32.121 "is_configured": true, 00:26:32.121 "data_offset": 256, 00:26:32.121 "data_size": 7936 00:26:32.121 } 00:26:32.121 ] 00:26:32.121 }' 00:26:32.121 06:57:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:32.121 06:57:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:32.690 06:57:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:32.690 06:57:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:26:32.949 06:57:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:26:32.949 06:57:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:26:32.949 06:57:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:32.949 [2024-08-14 06:58:00.139814] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:32.949 06:58:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # '[' 34d66c5d-e459-4cfe-bd64-c6989b363af7 '!=' 34d66c5d-e459-4cfe-bd64-c6989b363af7 ']' 00:26:32.949 06:58:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@578 -- # killprocess 106568 00:26:32.949 06:58:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@946 -- # '[' -z 106568 ']' 00:26:32.949 06:58:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # kill -0 106568 00:26:32.949 06:58:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # uname 00:26:32.949 06:58:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:32.949 06:58:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106568 00:26:32.949 06:58:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:33.208 06:58:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:33.208 06:58:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106568' 00:26:33.208 killing process with pid 106568 00:26:33.208 06:58:00 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@965 -- # kill 106568 00:26:33.209 [2024-08-14 06:58:00.205032] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:33.209 [2024-08-14 06:58:00.205149] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:33.209 [2024-08-14 06:58:00.205217] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:33.209 [2024-08-14 06:58:00.205229] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:26:33.209 06:58:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # wait 106568 00:26:33.209 [2024-08-14 06:58:00.228102] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:33.468 06:58:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@580 -- # return 0 00:26:33.469 00:26:33.469 real 0m14.061s 00:26:33.469 user 0m25.782s 00:26:33.469 sys 0m2.225s 00:26:33.469 06:58:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:33.469 ************************************ 00:26:33.469 END TEST raid_superblock_test_4k 00:26:33.469 ************************************ 00:26:33.469 06:58:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:33.469 06:58:00 bdev_raid -- bdev/bdev_raid.sh@978 -- # '[' true = true ']' 00:26:33.469 06:58:00 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:26:33.469 06:58:00 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:26:33.469 06:58:00 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:33.469 06:58:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:33.469 ************************************ 00:26:33.469 START TEST raid_rebuild_test_sb_4k 00:26:33.469 ************************************ 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false true 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # local verify=true 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # local strip_size 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # local create_arg 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@594 -- # local data_offset 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # raid_pid=107050 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # waitforlisten 107050 /var/tmp/spdk-raid.sock 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@827 -- # '[' -z 107050 ']' 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:33.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:33.469 06:58:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:33.469 [2024-08-14 06:58:00.635798] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:26:33.469 [2024-08-14 06:58:00.636029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107050 ] 00:26:33.469 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:33.469 Zero copy mechanism will not be used. 
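Note: the fixture for this rebuild test is the stock bdevperf example application listening on the private RPC socket /var/tmp/spdk-raid.sock; every configuration step that follows in this trace is an rpc.py call against that socket. A condensed sketch of the pattern, with the bdevperf invocation copied from the trace above and the probe call taken from the RPC calls used later (the backgrounding and the standalone probe shown here are illustrative, not captured output):
  # start the bdevperf fixture on the dedicated RPC socket (flags exactly as recorded in the trace)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  # then configure and inspect it over the same socket, e.g.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all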
00:26:33.728 [2024-08-14 06:58:00.766499] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.729 [2024-08-14 06:58:00.816425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.729 [2024-08-14 06:58:00.859628] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:33.729 [2024-08-14 06:58:00.859752] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:34.297 06:58:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:34.297 06:58:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # return 0 00:26:34.297 06:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:34.297 06:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:26:34.556 BaseBdev1_malloc 00:26:34.556 06:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:34.815 [2024-08-14 06:58:01.864481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:34.815 [2024-08-14 06:58:01.864670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.815 [2024-08-14 06:58:01.864734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:26:34.815 [2024-08-14 06:58:01.864775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:34.815 [2024-08-14 06:58:01.867130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.815 [2024-08-14 06:58:01.867245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:34.815 BaseBdev1 00:26:34.815 06:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:34.815 06:58:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:26:35.074 BaseBdev2_malloc 00:26:35.074 06:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:35.074 [2024-08-14 06:58:02.292675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:35.074 [2024-08-14 06:58:02.292817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:35.074 [2024-08-14 06:58:02.292861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:35.074 [2024-08-14 06:58:02.292893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:35.074 [2024-08-14 06:58:02.295053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:35.074 [2024-08-14 06:58:02.295138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:35.074 BaseBdev2 00:26:35.074 06:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:26:35.334 spare_malloc 00:26:35.334 06:58:02 
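Condensed for readability, the base-device setup the trace has just completed is five RPC calls: each data disk is a 32 MiB malloc bdev with 4096-byte blocks wrapped in a passthru bdev, plus one further malloc bdev reserved as the spare (commands copied from the trace and grouped here only as a sketch; the spare is wrapped in a delay bdev and a passthru bdev named "spare" by the calls that follow):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc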
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:35.593 spare_delay 00:26:35.593 06:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:35.852 [2024-08-14 06:58:02.922122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:35.852 [2024-08-14 06:58:02.922314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:35.852 [2024-08-14 06:58:02.922362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:35.852 [2024-08-14 06:58:02.922394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:35.852 [2024-08-14 06:58:02.924607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:35.852 [2024-08-14 06:58:02.924708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:35.852 spare 00:26:35.852 06:58:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:26:35.852 [2024-08-14 06:58:03.105865] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:36.112 [2024-08-14 06:58:03.107872] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:36.112 [2024-08-14 06:58:03.108107] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:26:36.112 [2024-08-14 06:58:03.108148] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:36.112 [2024-08-14 06:58:03.108513] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:26:36.112 [2024-08-14 06:58:03.108721] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:26:36.112 [2024-08-14 06:58:03.108774] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:26:36.112 [2024-08-14 06:58:03.108968] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:36.112 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:36.112 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:36.112 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:36.112 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:36.112 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:36.112 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:36.112 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:36.112 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:36.112 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:36.112 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:26:36.112 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.112 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.112 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:36.112 "name": "raid_bdev1", 00:26:36.112 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:36.112 "strip_size_kb": 0, 00:26:36.112 "state": "online", 00:26:36.112 "raid_level": "raid1", 00:26:36.112 "superblock": true, 00:26:36.112 "num_base_bdevs": 2, 00:26:36.112 "num_base_bdevs_discovered": 2, 00:26:36.112 "num_base_bdevs_operational": 2, 00:26:36.112 "base_bdevs_list": [ 00:26:36.112 { 00:26:36.112 "name": "BaseBdev1", 00:26:36.112 "uuid": "fe192336-d9b5-5f6f-9856-1394708a654b", 00:26:36.112 "is_configured": true, 00:26:36.112 "data_offset": 256, 00:26:36.112 "data_size": 7936 00:26:36.112 }, 00:26:36.112 { 00:26:36.112 "name": "BaseBdev2", 00:26:36.112 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:36.112 "is_configured": true, 00:26:36.112 "data_offset": 256, 00:26:36.112 "data_size": 7936 00:26:36.112 } 00:26:36.112 ] 00:26:36.112 }' 00:26:36.112 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:36.112 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:36.680 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:26:36.680 06:58:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:36.955 [2024-08-14 06:58:04.044502] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:36.955 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:26:36.955 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.955 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:37.235 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:26:37.235 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:26:37.235 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:26:37.235 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:26:37.235 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:37.235 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:37.235 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:37.235 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:37.235 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:37.235 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:37.235 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@12 -- # local i 00:26:37.235 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:37.235 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:37.235 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:37.235 [2024-08-14 06:58:04.459617] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:26:37.235 /dev/nbd0 00:26:37.235 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@865 -- # local i 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # break 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:37.493 1+0 records in 00:26:37.493 1+0 records out 00:26:37.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288113 s, 14.2 MB/s 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # size=4096 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # return 0 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:26:37.493 06:58:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:26:38.058 7936+0 records in 00:26:38.058 7936+0 records out 00:26:38.058 32505856 bytes (33 MB, 31 MiB) copied, 0.578984 s, 56.1 MB/s 00:26:38.058 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:38.058 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 
00:26:38.058 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:38.058 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:38.058 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:26:38.058 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:38.058 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:38.315 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:38.315 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:38.315 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:38.315 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:38.315 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:38.315 [2024-08-14 06:58:05.323276] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:38.315 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:38.315 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:26:38.315 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:26:38.315 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:38.315 [2024-08-14 06:58:05.523082] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:38.316 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:38.316 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:38.316 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:38.316 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:38.316 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:38.316 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:38.316 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:38.316 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:38.316 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:38.316 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:38.316 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.316 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.575 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:38.575 "name": "raid_bdev1", 00:26:38.575 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:38.575 "strip_size_kb": 0, 00:26:38.575 "state": "online", 00:26:38.575 
"raid_level": "raid1", 00:26:38.575 "superblock": true, 00:26:38.575 "num_base_bdevs": 2, 00:26:38.575 "num_base_bdevs_discovered": 1, 00:26:38.575 "num_base_bdevs_operational": 1, 00:26:38.575 "base_bdevs_list": [ 00:26:38.575 { 00:26:38.575 "name": null, 00:26:38.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.575 "is_configured": false, 00:26:38.575 "data_offset": 256, 00:26:38.575 "data_size": 7936 00:26:38.575 }, 00:26:38.575 { 00:26:38.575 "name": "BaseBdev2", 00:26:38.575 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:38.575 "is_configured": true, 00:26:38.575 "data_offset": 256, 00:26:38.575 "data_size": 7936 00:26:38.575 } 00:26:38.575 ] 00:26:38.575 }' 00:26:38.575 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:38.575 06:58:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:39.143 06:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:39.402 [2024-08-14 06:58:06.517412] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:39.402 [2024-08-14 06:58:06.521725] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:26:39.402 [2024-08-14 06:58:06.523797] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:39.402 06:58:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:40.340 06:58:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:40.340 06:58:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:40.340 06:58:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:40.340 06:58:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:40.340 06:58:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:40.340 06:58:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.340 06:58:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.599 06:58:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:40.599 "name": "raid_bdev1", 00:26:40.599 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:40.600 "strip_size_kb": 0, 00:26:40.600 "state": "online", 00:26:40.600 "raid_level": "raid1", 00:26:40.600 "superblock": true, 00:26:40.600 "num_base_bdevs": 2, 00:26:40.600 "num_base_bdevs_discovered": 2, 00:26:40.600 "num_base_bdevs_operational": 2, 00:26:40.600 "process": { 00:26:40.600 "type": "rebuild", 00:26:40.600 "target": "spare", 00:26:40.600 "progress": { 00:26:40.600 "blocks": 3072, 00:26:40.600 "percent": 38 00:26:40.600 } 00:26:40.600 }, 00:26:40.600 "base_bdevs_list": [ 00:26:40.600 { 00:26:40.600 "name": "spare", 00:26:40.600 "uuid": "e4be7aca-51d6-5eba-9f57-3d41411a8c79", 00:26:40.600 "is_configured": true, 00:26:40.600 "data_offset": 256, 00:26:40.600 "data_size": 7936 00:26:40.600 }, 00:26:40.600 { 00:26:40.600 "name": "BaseBdev2", 00:26:40.600 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:40.600 "is_configured": true, 00:26:40.600 "data_offset": 
256, 00:26:40.600 "data_size": 7936 00:26:40.600 } 00:26:40.600 ] 00:26:40.600 }' 00:26:40.600 06:58:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:40.600 06:58:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:40.600 06:58:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:40.600 06:58:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:40.600 06:58:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:40.858 [2024-08-14 06:58:08.028045] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:40.858 [2024-08-14 06:58:08.031044] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:40.858 [2024-08-14 06:58:08.031108] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:40.858 [2024-08-14 06:58:08.031124] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:40.858 [2024-08-14 06:58:08.031135] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:40.858 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:40.858 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:40.858 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:40.858 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:40.858 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:40.858 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:40.858 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:40.858 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:40.858 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:40.858 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:40.858 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.858 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:41.118 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:41.118 "name": "raid_bdev1", 00:26:41.118 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:41.118 "strip_size_kb": 0, 00:26:41.118 "state": "online", 00:26:41.118 "raid_level": "raid1", 00:26:41.118 "superblock": true, 00:26:41.118 "num_base_bdevs": 2, 00:26:41.118 "num_base_bdevs_discovered": 1, 00:26:41.118 "num_base_bdevs_operational": 1, 00:26:41.118 "base_bdevs_list": [ 00:26:41.118 { 00:26:41.118 "name": null, 00:26:41.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:41.118 "is_configured": false, 00:26:41.118 "data_offset": 256, 00:26:41.118 "data_size": 7936 00:26:41.118 }, 00:26:41.118 { 00:26:41.118 
"name": "BaseBdev2", 00:26:41.118 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:41.118 "is_configured": true, 00:26:41.118 "data_offset": 256, 00:26:41.118 "data_size": 7936 00:26:41.118 } 00:26:41.118 ] 00:26:41.118 }' 00:26:41.118 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:41.118 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:41.684 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:41.684 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:41.684 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:41.684 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:41.684 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:41.684 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.684 06:58:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:41.941 06:58:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:41.942 "name": "raid_bdev1", 00:26:41.942 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:41.942 "strip_size_kb": 0, 00:26:41.942 "state": "online", 00:26:41.942 "raid_level": "raid1", 00:26:41.942 "superblock": true, 00:26:41.942 "num_base_bdevs": 2, 00:26:41.942 "num_base_bdevs_discovered": 1, 00:26:41.942 "num_base_bdevs_operational": 1, 00:26:41.942 "base_bdevs_list": [ 00:26:41.942 { 00:26:41.942 "name": null, 00:26:41.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:41.942 "is_configured": false, 00:26:41.942 "data_offset": 256, 00:26:41.942 "data_size": 7936 00:26:41.942 }, 00:26:41.942 { 00:26:41.942 "name": "BaseBdev2", 00:26:41.942 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:41.942 "is_configured": true, 00:26:41.942 "data_offset": 256, 00:26:41.942 "data_size": 7936 00:26:41.942 } 00:26:41.942 ] 00:26:41.942 }' 00:26:41.942 06:58:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:41.942 06:58:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:41.942 06:58:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:41.942 06:58:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:41.942 06:58:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:42.201 [2024-08-14 06:58:09.373705] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:42.201 [2024-08-14 06:58:09.378148] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:26:42.201 [2024-08-14 06:58:09.380278] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:42.201 06:58:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@678 -- # sleep 1 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:43.577 "name": "raid_bdev1", 00:26:43.577 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:43.577 "strip_size_kb": 0, 00:26:43.577 "state": "online", 00:26:43.577 "raid_level": "raid1", 00:26:43.577 "superblock": true, 00:26:43.577 "num_base_bdevs": 2, 00:26:43.577 "num_base_bdevs_discovered": 2, 00:26:43.577 "num_base_bdevs_operational": 2, 00:26:43.577 "process": { 00:26:43.577 "type": "rebuild", 00:26:43.577 "target": "spare", 00:26:43.577 "progress": { 00:26:43.577 "blocks": 3072, 00:26:43.577 "percent": 38 00:26:43.577 } 00:26:43.577 }, 00:26:43.577 "base_bdevs_list": [ 00:26:43.577 { 00:26:43.577 "name": "spare", 00:26:43.577 "uuid": "e4be7aca-51d6-5eba-9f57-3d41411a8c79", 00:26:43.577 "is_configured": true, 00:26:43.577 "data_offset": 256, 00:26:43.577 "data_size": 7936 00:26:43.577 }, 00:26:43.577 { 00:26:43.577 "name": "BaseBdev2", 00:26:43.577 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:43.577 "is_configured": true, 00:26:43.577 "data_offset": 256, 00:26:43.577 "data_size": 7936 00:26:43.577 } 00:26:43.577 ] 00:26:43.577 }' 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:26:43.577 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # local timeout=1249 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:43.577 06:58:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.577 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.836 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:43.836 "name": "raid_bdev1", 00:26:43.836 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:43.836 "strip_size_kb": 0, 00:26:43.836 "state": "online", 00:26:43.836 "raid_level": "raid1", 00:26:43.836 "superblock": true, 00:26:43.836 "num_base_bdevs": 2, 00:26:43.836 "num_base_bdevs_discovered": 2, 00:26:43.836 "num_base_bdevs_operational": 2, 00:26:43.836 "process": { 00:26:43.836 "type": "rebuild", 00:26:43.836 "target": "spare", 00:26:43.836 "progress": { 00:26:43.836 "blocks": 3840, 00:26:43.836 "percent": 48 00:26:43.836 } 00:26:43.836 }, 00:26:43.836 "base_bdevs_list": [ 00:26:43.836 { 00:26:43.836 "name": "spare", 00:26:43.836 "uuid": "e4be7aca-51d6-5eba-9f57-3d41411a8c79", 00:26:43.836 "is_configured": true, 00:26:43.836 "data_offset": 256, 00:26:43.836 "data_size": 7936 00:26:43.836 }, 00:26:43.836 { 00:26:43.836 "name": "BaseBdev2", 00:26:43.836 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:43.836 "is_configured": true, 00:26:43.836 "data_offset": 256, 00:26:43.836 "data_size": 7936 00:26:43.836 } 00:26:43.836 ] 00:26:43.836 }' 00:26:43.836 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:43.836 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:43.836 06:58:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:43.836 06:58:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:43.836 06:58:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:26:45.214 06:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:45.214 06:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:45.214 06:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:45.214 06:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:45.214 06:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:45.214 06:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:45.214 06:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.214 06:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.214 06:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:45.214 "name": "raid_bdev1", 00:26:45.214 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 
00:26:45.214 "strip_size_kb": 0, 00:26:45.214 "state": "online", 00:26:45.214 "raid_level": "raid1", 00:26:45.214 "superblock": true, 00:26:45.214 "num_base_bdevs": 2, 00:26:45.214 "num_base_bdevs_discovered": 2, 00:26:45.214 "num_base_bdevs_operational": 2, 00:26:45.214 "process": { 00:26:45.214 "type": "rebuild", 00:26:45.214 "target": "spare", 00:26:45.214 "progress": { 00:26:45.214 "blocks": 7168, 00:26:45.214 "percent": 90 00:26:45.214 } 00:26:45.214 }, 00:26:45.214 "base_bdevs_list": [ 00:26:45.214 { 00:26:45.214 "name": "spare", 00:26:45.214 "uuid": "e4be7aca-51d6-5eba-9f57-3d41411a8c79", 00:26:45.214 "is_configured": true, 00:26:45.214 "data_offset": 256, 00:26:45.214 "data_size": 7936 00:26:45.214 }, 00:26:45.214 { 00:26:45.214 "name": "BaseBdev2", 00:26:45.214 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:45.214 "is_configured": true, 00:26:45.214 "data_offset": 256, 00:26:45.214 "data_size": 7936 00:26:45.214 } 00:26:45.214 ] 00:26:45.214 }' 00:26:45.214 06:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:45.214 06:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:45.214 06:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:45.214 06:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:45.214 06:58:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:26:45.473 [2024-08-14 06:58:12.494418] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:45.473 [2024-08-14 06:58:12.494514] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:45.473 [2024-08-14 06:58:12.494648] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:46.409 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:46.409 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:46.409 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:46.409 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:46.409 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:46.409 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:46.409 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.409 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.409 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:46.409 "name": "raid_bdev1", 00:26:46.409 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:46.409 "strip_size_kb": 0, 00:26:46.409 "state": "online", 00:26:46.409 "raid_level": "raid1", 00:26:46.409 "superblock": true, 00:26:46.409 "num_base_bdevs": 2, 00:26:46.409 "num_base_bdevs_discovered": 2, 00:26:46.409 "num_base_bdevs_operational": 2, 00:26:46.409 "base_bdevs_list": [ 00:26:46.409 { 00:26:46.409 "name": "spare", 00:26:46.409 "uuid": "e4be7aca-51d6-5eba-9f57-3d41411a8c79", 00:26:46.409 "is_configured": true, 
00:26:46.409 "data_offset": 256, 00:26:46.409 "data_size": 7936 00:26:46.409 }, 00:26:46.409 { 00:26:46.409 "name": "BaseBdev2", 00:26:46.410 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:46.410 "is_configured": true, 00:26:46.410 "data_offset": 256, 00:26:46.410 "data_size": 7936 00:26:46.410 } 00:26:46.410 ] 00:26:46.410 }' 00:26:46.410 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:46.410 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:46.410 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:46.410 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:46.410 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@724 -- # break 00:26:46.410 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:46.410 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:46.410 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:46.410 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:46.410 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:46.410 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.410 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.671 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:46.671 "name": "raid_bdev1", 00:26:46.671 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:46.671 "strip_size_kb": 0, 00:26:46.671 "state": "online", 00:26:46.671 "raid_level": "raid1", 00:26:46.671 "superblock": true, 00:26:46.671 "num_base_bdevs": 2, 00:26:46.671 "num_base_bdevs_discovered": 2, 00:26:46.671 "num_base_bdevs_operational": 2, 00:26:46.671 "base_bdevs_list": [ 00:26:46.671 { 00:26:46.671 "name": "spare", 00:26:46.671 "uuid": "e4be7aca-51d6-5eba-9f57-3d41411a8c79", 00:26:46.671 "is_configured": true, 00:26:46.671 "data_offset": 256, 00:26:46.671 "data_size": 7936 00:26:46.671 }, 00:26:46.671 { 00:26:46.671 "name": "BaseBdev2", 00:26:46.671 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:46.671 "is_configured": true, 00:26:46.671 "data_offset": 256, 00:26:46.671 "data_size": 7936 00:26:46.671 } 00:26:46.671 ] 00:26:46.671 }' 00:26:46.671 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:46.932 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:46.932 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:46.932 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:46.932 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:46.932 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:46.932 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:26:46.932 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:46.932 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:46.932 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:46.932 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:46.932 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:46.932 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:46.932 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:46.932 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.932 06:58:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.192 06:58:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:47.192 "name": "raid_bdev1", 00:26:47.192 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:47.192 "strip_size_kb": 0, 00:26:47.192 "state": "online", 00:26:47.192 "raid_level": "raid1", 00:26:47.192 "superblock": true, 00:26:47.192 "num_base_bdevs": 2, 00:26:47.192 "num_base_bdevs_discovered": 2, 00:26:47.192 "num_base_bdevs_operational": 2, 00:26:47.192 "base_bdevs_list": [ 00:26:47.192 { 00:26:47.192 "name": "spare", 00:26:47.192 "uuid": "e4be7aca-51d6-5eba-9f57-3d41411a8c79", 00:26:47.192 "is_configured": true, 00:26:47.192 "data_offset": 256, 00:26:47.192 "data_size": 7936 00:26:47.192 }, 00:26:47.192 { 00:26:47.192 "name": "BaseBdev2", 00:26:47.192 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:47.192 "is_configured": true, 00:26:47.192 "data_offset": 256, 00:26:47.192 "data_size": 7936 00:26:47.192 } 00:26:47.192 ] 00:26:47.192 }' 00:26:47.192 06:58:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:47.192 06:58:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:47.765 06:58:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:47.765 [2024-08-14 06:58:14.955344] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:47.765 [2024-08-14 06:58:14.955397] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:47.765 [2024-08-14 06:58:14.955513] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:47.765 [2024-08-14 06:58:14.955590] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:47.765 [2024-08-14 06:58:14.955602] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:26:47.765 06:58:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # jq length 00:26:47.765 06:58:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.025 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:26:48.025 06:58:15 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:26:48.025 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:26:48.025 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:48.025 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:48.025 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:48.025 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:48.025 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:48.025 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:48.025 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:26:48.025 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:48.025 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:48.025 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:48.285 /dev/nbd0 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@865 -- # local i 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # break 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:48.285 1+0 records in 00:26:48.285 1+0 records out 00:26:48.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413293 s, 9.9 MB/s 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # size=4096 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # return 0 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:48.285 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:48.545 /dev/nbd1 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@865 -- # local i 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # break 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:48.545 1+0 records in 00:26:48.545 1+0 records out 00:26:48.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049091 s, 8.3 MB/s 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # size=4096 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # return 0 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:48.545 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:48.805 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:48.805 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:48.805 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:48.805 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:48.805 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:26:48.805 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:48.805 06:58:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:48.805 06:58:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:48.805 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:48.805 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:48.805 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:48.805 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:48.805 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:48.805 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:26:48.805 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:26:48.805 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:48.805 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:49.064 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:49.064 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:49.064 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:49.064 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:49.065 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:49.065 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:49.065 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:26:49.065 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:26:49.065 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:26:49.065 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:49.325 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:49.585 [2024-08-14 06:58:16.738856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:49.585 [2024-08-14 06:58:16.738943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:49.585 [2024-08-14 06:58:16.738970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:49.585 [2024-08-14 06:58:16.738980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:49.585 [2024-08-14 06:58:16.741402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:49.585 [2024-08-14 06:58:16.741445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:49.585 [2024-08-14 06:58:16.741550] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:49.585 [2024-08-14 06:58:16.741588] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:49.585 [2024-08-14 06:58:16.741728] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:49.585 spare 00:26:49.585 06:58:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:49.585 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:49.585 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:49.585 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:49.585 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:49.585 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:49.585 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:49.585 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:49.585 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:49.585 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:49.585 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.585 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.844 [2024-08-14 06:58:16.841655] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:26:49.844 [2024-08-14 06:58:16.841733] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:49.844 [2024-08-14 06:58:16.842114] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:26:49.844 [2024-08-14 06:58:16.842336] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:26:49.844 [2024-08-14 06:58:16.842366] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:26:49.844 [2024-08-14 06:58:16.842538] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:49.844 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:49.844 "name": "raid_bdev1", 00:26:49.844 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:49.844 "strip_size_kb": 0, 00:26:49.844 "state": "online", 00:26:49.844 "raid_level": "raid1", 00:26:49.844 "superblock": true, 00:26:49.844 "num_base_bdevs": 2, 00:26:49.844 "num_base_bdevs_discovered": 2, 00:26:49.845 "num_base_bdevs_operational": 2, 00:26:49.845 "base_bdevs_list": [ 00:26:49.845 { 00:26:49.845 "name": "spare", 00:26:49.845 "uuid": "e4be7aca-51d6-5eba-9f57-3d41411a8c79", 00:26:49.845 "is_configured": true, 00:26:49.845 "data_offset": 256, 00:26:49.845 "data_size": 7936 00:26:49.845 }, 00:26:49.845 { 00:26:49.845 "name": "BaseBdev2", 00:26:49.845 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:49.845 "is_configured": true, 00:26:49.845 "data_offset": 256, 00:26:49.845 "data_size": 7936 00:26:49.845 } 00:26:49.845 ] 00:26:49.845 }' 00:26:49.845 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:49.845 06:58:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:50.413 06:58:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:50.413 06:58:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:50.413 06:58:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:50.413 06:58:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:50.413 06:58:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:50.413 06:58:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.413 06:58:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.672 06:58:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:50.672 "name": "raid_bdev1", 00:26:50.672 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:50.672 "strip_size_kb": 0, 00:26:50.672 "state": "online", 00:26:50.672 "raid_level": "raid1", 00:26:50.672 "superblock": true, 00:26:50.672 "num_base_bdevs": 2, 00:26:50.672 "num_base_bdevs_discovered": 2, 00:26:50.672 "num_base_bdevs_operational": 2, 00:26:50.672 "base_bdevs_list": [ 00:26:50.672 { 00:26:50.672 "name": "spare", 00:26:50.672 "uuid": "e4be7aca-51d6-5eba-9f57-3d41411a8c79", 00:26:50.672 "is_configured": true, 00:26:50.672 "data_offset": 256, 00:26:50.672 "data_size": 7936 00:26:50.672 }, 00:26:50.672 { 00:26:50.672 "name": "BaseBdev2", 00:26:50.672 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:50.672 "is_configured": true, 00:26:50.672 "data_offset": 256, 00:26:50.672 "data_size": 7936 00:26:50.672 } 00:26:50.672 ] 00:26:50.672 }' 00:26:50.672 06:58:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:50.672 06:58:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:50.672 06:58:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:50.672 06:58:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:50.672 06:58:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.672 06:58:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:50.932 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:26:50.932 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:51.192 [2024-08-14 06:58:18.296431] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:51.192 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:51.192 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:51.192 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:51.192 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:51.192 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:51.192 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=1 00:26:51.192 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:51.192 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:51.192 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:51.192 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:51.192 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.192 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.452 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:51.452 "name": "raid_bdev1", 00:26:51.452 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:51.452 "strip_size_kb": 0, 00:26:51.452 "state": "online", 00:26:51.452 "raid_level": "raid1", 00:26:51.452 "superblock": true, 00:26:51.452 "num_base_bdevs": 2, 00:26:51.452 "num_base_bdevs_discovered": 1, 00:26:51.452 "num_base_bdevs_operational": 1, 00:26:51.452 "base_bdevs_list": [ 00:26:51.452 { 00:26:51.452 "name": null, 00:26:51.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.452 "is_configured": false, 00:26:51.452 "data_offset": 256, 00:26:51.452 "data_size": 7936 00:26:51.452 }, 00:26:51.452 { 00:26:51.452 "name": "BaseBdev2", 00:26:51.452 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:51.452 "is_configured": true, 00:26:51.452 "data_offset": 256, 00:26:51.452 "data_size": 7936 00:26:51.452 } 00:26:51.452 ] 00:26:51.452 }' 00:26:51.452 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:51.452 06:58:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:52.021 06:58:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:52.280 [2024-08-14 06:58:19.298874] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:52.280 [2024-08-14 06:58:19.299106] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:52.280 [2024-08-14 06:58:19.299127] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
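The sequence traced above removes the spare from the array, confirms the array is degraded (one discovered and one operational base bdev, with a null slot in base_bdevs_list), and then re-adds the same spare; examine notices its superblock carries an older seq_number (4 vs 5) and re-admits it, which starts a new rebuild. A hedged sketch of that remove/verify/re-add cycle using only the RPC names shown in the trace; the rpc wrapper function is illustrative:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# Drop the spare from the array; raid_bdev1 stays online but degraded.
rpc bdev_raid_remove_base_bdev spare
rpc bdev_raid_get_bdevs all |
    jq -e '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered == 1'
# Re-register the spare: its older superblock seq_number makes the examine
# path re-add it to raid_bdev1 and kick off a fresh rebuild.
rpc bdev_raid_add_base_bdev raid_bdev1 spare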
00:26:52.280 [2024-08-14 06:58:19.299188] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:52.280 [2024-08-14 06:58:19.303659] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:26:52.280 [2024-08-14 06:58:19.305805] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:52.280 06:58:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # sleep 1 00:26:53.219 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:53.219 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:53.219 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:53.219 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:53.219 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:53.219 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.219 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.478 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:53.478 "name": "raid_bdev1", 00:26:53.478 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:53.478 "strip_size_kb": 0, 00:26:53.478 "state": "online", 00:26:53.478 "raid_level": "raid1", 00:26:53.478 "superblock": true, 00:26:53.478 "num_base_bdevs": 2, 00:26:53.478 "num_base_bdevs_discovered": 2, 00:26:53.478 "num_base_bdevs_operational": 2, 00:26:53.478 "process": { 00:26:53.478 "type": "rebuild", 00:26:53.478 "target": "spare", 00:26:53.478 "progress": { 00:26:53.478 "blocks": 3072, 00:26:53.478 "percent": 38 00:26:53.478 } 00:26:53.478 }, 00:26:53.478 "base_bdevs_list": [ 00:26:53.478 { 00:26:53.478 "name": "spare", 00:26:53.478 "uuid": "e4be7aca-51d6-5eba-9f57-3d41411a8c79", 00:26:53.478 "is_configured": true, 00:26:53.478 "data_offset": 256, 00:26:53.478 "data_size": 7936 00:26:53.478 }, 00:26:53.479 { 00:26:53.479 "name": "BaseBdev2", 00:26:53.479 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:53.479 "is_configured": true, 00:26:53.479 "data_offset": 256, 00:26:53.479 "data_size": 7936 00:26:53.479 } 00:26:53.479 ] 00:26:53.479 }' 00:26:53.479 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:53.479 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:53.479 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:53.479 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:53.479 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:53.738 [2024-08-14 06:58:20.862308] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:53.738 [2024-08-14 06:58:20.913246] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:53.738 [2024-08-14 06:58:20.913375] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:26:53.738 [2024-08-14 06:58:20.913394] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:53.738 [2024-08-14 06:58:20.913405] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:53.738 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:53.738 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:53.738 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:53.738 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:53.738 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:53.738 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:53.738 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:53.738 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:53.738 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:53.738 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:53.738 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.738 06:58:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.997 06:58:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:53.997 "name": "raid_bdev1", 00:26:53.997 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:53.997 "strip_size_kb": 0, 00:26:53.997 "state": "online", 00:26:53.997 "raid_level": "raid1", 00:26:53.997 "superblock": true, 00:26:53.997 "num_base_bdevs": 2, 00:26:53.997 "num_base_bdevs_discovered": 1, 00:26:53.997 "num_base_bdevs_operational": 1, 00:26:53.997 "base_bdevs_list": [ 00:26:53.997 { 00:26:53.997 "name": null, 00:26:53.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.997 "is_configured": false, 00:26:53.997 "data_offset": 256, 00:26:53.997 "data_size": 7936 00:26:53.997 }, 00:26:53.997 { 00:26:53.997 "name": "BaseBdev2", 00:26:53.997 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:53.997 "is_configured": true, 00:26:53.997 "data_offset": 256, 00:26:53.997 "data_size": 7936 00:26:53.997 } 00:26:53.997 ] 00:26:53.997 }' 00:26:53.997 06:58:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:53.997 06:58:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:54.566 06:58:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:54.825 [2024-08-14 06:58:21.968271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:54.825 [2024-08-14 06:58:21.968366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:54.825 [2024-08-14 06:58:21.968393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:54.825 [2024-08-14 06:58:21.968406] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:54.825 [2024-08-14 06:58:21.968873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:54.825 [2024-08-14 06:58:21.968896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:54.825 [2024-08-14 06:58:21.968992] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:54.825 [2024-08-14 06:58:21.969011] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:54.825 [2024-08-14 06:58:21.969022] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:54.825 [2024-08-14 06:58:21.969048] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:54.825 [2024-08-14 06:58:21.973496] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:26:54.825 spare 00:26:54.825 [2024-08-14 06:58:21.975689] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:54.825 06:58:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # sleep 1 00:26:55.762 06:58:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:55.762 06:58:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:55.762 06:58:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:55.762 06:58:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:55.762 06:58:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:55.762 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.762 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.022 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:56.022 "name": "raid_bdev1", 00:26:56.022 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:56.022 "strip_size_kb": 0, 00:26:56.022 "state": "online", 00:26:56.022 "raid_level": "raid1", 00:26:56.022 "superblock": true, 00:26:56.022 "num_base_bdevs": 2, 00:26:56.022 "num_base_bdevs_discovered": 2, 00:26:56.022 "num_base_bdevs_operational": 2, 00:26:56.022 "process": { 00:26:56.022 "type": "rebuild", 00:26:56.022 "target": "spare", 00:26:56.022 "progress": { 00:26:56.022 "blocks": 3072, 00:26:56.022 "percent": 38 00:26:56.022 } 00:26:56.022 }, 00:26:56.022 "base_bdevs_list": [ 00:26:56.022 { 00:26:56.022 "name": "spare", 00:26:56.022 "uuid": "e4be7aca-51d6-5eba-9f57-3d41411a8c79", 00:26:56.022 "is_configured": true, 00:26:56.022 "data_offset": 256, 00:26:56.022 "data_size": 7936 00:26:56.022 }, 00:26:56.022 { 00:26:56.022 "name": "BaseBdev2", 00:26:56.022 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:56.022 "is_configured": true, 00:26:56.022 "data_offset": 256, 00:26:56.022 "data_size": 7936 00:26:56.022 } 00:26:56.022 ] 00:26:56.022 }' 00:26:56.022 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:56.281 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
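Here the spare is exercised through its passthru vbdev: deleting the passthru drops the base bdev out of raid_bdev1, and recreating it on the same spare_delay bdev lets the examine path find the raid superblock and re-claim it, so a rebuild targeting spare shows up again. A short sketch of that delete/recreate/poll step, assuming only the RPC names visible in the trace; the rpc wrapper is illustrative:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# Tear down the passthru vbdev fronting the spare, then recreate it on the same
# delay bdev; the raid superblock still on it lets examine re-claim it.
rpc bdev_passthru_delete spare
rpc bdev_passthru_create -b spare_delay -p spare
# Shortly afterwards a rebuild targeting "spare" should be reported again.
rpc bdev_raid_get_bdevs all |
    jq -r '.[] | select(.name == "raid_bdev1") | .process.target // "none"'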
00:26:56.281 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:56.281 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:56.281 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:56.281 [2024-08-14 06:58:23.528952] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:56.541 [2024-08-14 06:58:23.582520] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:56.541 [2024-08-14 06:58:23.582723] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:56.541 [2024-08-14 06:58:23.582787] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:56.541 [2024-08-14 06:58:23.582813] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:56.541 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:56.541 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:56.541 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:56.541 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:56.541 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:56.541 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:56.541 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:56.541 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:56.541 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:56.541 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:56.541 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.541 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.800 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:56.800 "name": "raid_bdev1", 00:26:56.800 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:56.800 "strip_size_kb": 0, 00:26:56.800 "state": "online", 00:26:56.800 "raid_level": "raid1", 00:26:56.800 "superblock": true, 00:26:56.800 "num_base_bdevs": 2, 00:26:56.800 "num_base_bdevs_discovered": 1, 00:26:56.800 "num_base_bdevs_operational": 1, 00:26:56.800 "base_bdevs_list": [ 00:26:56.800 { 00:26:56.800 "name": null, 00:26:56.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.800 "is_configured": false, 00:26:56.800 "data_offset": 256, 00:26:56.800 "data_size": 7936 00:26:56.800 }, 00:26:56.800 { 00:26:56.800 "name": "BaseBdev2", 00:26:56.800 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:56.800 "is_configured": true, 00:26:56.800 "data_offset": 256, 00:26:56.800 "data_size": 7936 00:26:56.800 } 00:26:56.800 ] 00:26:56.800 }' 00:26:56.800 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:26:56.800 06:58:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:57.369 06:58:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:57.369 06:58:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:57.369 06:58:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:57.369 06:58:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:57.369 06:58:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:57.369 06:58:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.369 06:58:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.629 06:58:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:57.629 "name": "raid_bdev1", 00:26:57.629 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:57.629 "strip_size_kb": 0, 00:26:57.629 "state": "online", 00:26:57.629 "raid_level": "raid1", 00:26:57.629 "superblock": true, 00:26:57.629 "num_base_bdevs": 2, 00:26:57.629 "num_base_bdevs_discovered": 1, 00:26:57.629 "num_base_bdevs_operational": 1, 00:26:57.629 "base_bdevs_list": [ 00:26:57.629 { 00:26:57.629 "name": null, 00:26:57.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.629 "is_configured": false, 00:26:57.629 "data_offset": 256, 00:26:57.629 "data_size": 7936 00:26:57.629 }, 00:26:57.629 { 00:26:57.629 "name": "BaseBdev2", 00:26:57.629 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:57.629 "is_configured": true, 00:26:57.629 "data_offset": 256, 00:26:57.629 "data_size": 7936 00:26:57.629 } 00:26:57.629 ] 00:26:57.629 }' 00:26:57.629 06:58:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:57.629 06:58:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:57.629 06:58:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:57.629 06:58:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:57.629 06:58:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:57.888 06:58:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:58.147 [2024-08-14 06:58:25.228578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:58.147 [2024-08-14 06:58:25.228776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:58.147 [2024-08-14 06:58:25.228811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:26:58.147 [2024-08-14 06:58:25.228822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:58.147 [2024-08-14 06:58:25.229288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:58.147 [2024-08-14 06:58:25.229310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:26:58.147 [2024-08-14 06:58:25.229404] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:58.147 [2024-08-14 06:58:25.229430] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:58.147 [2024-08-14 06:58:25.229444] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:58.147 BaseBdev1 00:26:58.147 06:58:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@789 -- # sleep 1 00:26:59.085 06:58:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:59.085 06:58:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:59.085 06:58:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:59.085 06:58:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:59.085 06:58:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:59.085 06:58:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:59.085 06:58:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:59.085 06:58:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:59.085 06:58:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:59.085 06:58:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:59.085 06:58:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.085 06:58:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.352 06:58:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:59.352 "name": "raid_bdev1", 00:26:59.352 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:26:59.352 "strip_size_kb": 0, 00:26:59.352 "state": "online", 00:26:59.352 "raid_level": "raid1", 00:26:59.352 "superblock": true, 00:26:59.352 "num_base_bdevs": 2, 00:26:59.352 "num_base_bdevs_discovered": 1, 00:26:59.352 "num_base_bdevs_operational": 1, 00:26:59.352 "base_bdevs_list": [ 00:26:59.352 { 00:26:59.352 "name": null, 00:26:59.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.352 "is_configured": false, 00:26:59.352 "data_offset": 256, 00:26:59.352 "data_size": 7936 00:26:59.352 }, 00:26:59.352 { 00:26:59.352 "name": "BaseBdev2", 00:26:59.352 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:26:59.352 "is_configured": true, 00:26:59.352 "data_offset": 256, 00:26:59.352 "data_size": 7936 00:26:59.352 } 00:26:59.352 ] 00:26:59.352 }' 00:26:59.352 06:58:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:59.352 06:58:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:59.936 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:59.936 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:59.936 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:26:59.936 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:59.936 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:59.936 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.936 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.195 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:00.195 "name": "raid_bdev1", 00:27:00.195 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:27:00.195 "strip_size_kb": 0, 00:27:00.195 "state": "online", 00:27:00.195 "raid_level": "raid1", 00:27:00.195 "superblock": true, 00:27:00.195 "num_base_bdevs": 2, 00:27:00.195 "num_base_bdevs_discovered": 1, 00:27:00.195 "num_base_bdevs_operational": 1, 00:27:00.195 "base_bdevs_list": [ 00:27:00.195 { 00:27:00.195 "name": null, 00:27:00.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.195 "is_configured": false, 00:27:00.195 "data_offset": 256, 00:27:00.195 "data_size": 7936 00:27:00.195 }, 00:27:00.195 { 00:27:00.195 "name": "BaseBdev2", 00:27:00.195 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:27:00.195 "is_configured": true, 00:27:00.195 "data_offset": 256, 00:27:00.195 "data_size": 7936 00:27:00.195 } 00:27:00.195 ] 00:27:00.195 }' 00:27:00.195 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:00.195 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:00.195 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:00.195 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:00.195 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:00.196 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@646 -- # local es=0 00:27:00.196 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:00.196 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:00.196 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:27:00.196 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:00.196 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:27:00.196 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:00.196 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:27:00.196 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:00.196 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:00.196 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:00.455 [2024-08-14 06:58:27.652566] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:00.455 [2024-08-14 06:58:27.652859] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:00.455 [2024-08-14 06:58:27.652942] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:00.455 request: 00:27:00.455 { 00:27:00.455 "base_bdev": "BaseBdev1", 00:27:00.455 "raid_bdev": "raid_bdev1", 00:27:00.455 "method": "bdev_raid_add_base_bdev", 00:27:00.455 "req_id": 1 00:27:00.455 } 00:27:00.455 Got JSON-RPC error response 00:27:00.455 response: 00:27:00.455 { 00:27:00.455 "code": -22, 00:27:00.455 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:00.455 } 00:27:00.455 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@649 -- # es=1 00:27:00.455 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:27:00.455 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:27:00.455 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:27:00.455 06:58:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@793 -- # sleep 1 00:27:01.834 06:58:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:01.834 06:58:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:01.834 06:58:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:01.834 06:58:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:01.835 06:58:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:01.835 06:58:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:01.835 06:58:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:01.835 06:58:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:01.835 06:58:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:01.835 06:58:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:01.835 06:58:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.835 06:58:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.835 06:58:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:01.835 "name": "raid_bdev1", 00:27:01.835 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:27:01.835 "strip_size_kb": 0, 00:27:01.835 "state": "online", 00:27:01.835 "raid_level": "raid1", 00:27:01.835 "superblock": true, 00:27:01.835 "num_base_bdevs": 2, 00:27:01.835 "num_base_bdevs_discovered": 1, 00:27:01.835 "num_base_bdevs_operational": 1, 00:27:01.835 
"base_bdevs_list": [ 00:27:01.835 { 00:27:01.835 "name": null, 00:27:01.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.835 "is_configured": false, 00:27:01.835 "data_offset": 256, 00:27:01.835 "data_size": 7936 00:27:01.835 }, 00:27:01.835 { 00:27:01.835 "name": "BaseBdev2", 00:27:01.835 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:27:01.835 "is_configured": true, 00:27:01.835 "data_offset": 256, 00:27:01.835 "data_size": 7936 00:27:01.835 } 00:27:01.835 ] 00:27:01.835 }' 00:27:01.835 06:58:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:01.835 06:58:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:02.404 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:02.404 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:02.404 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:02.404 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:02.404 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:02.404 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.404 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.664 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:02.664 "name": "raid_bdev1", 00:27:02.664 "uuid": "4cc601cb-5ff5-45d5-bd94-5ba29052cb05", 00:27:02.664 "strip_size_kb": 0, 00:27:02.664 "state": "online", 00:27:02.664 "raid_level": "raid1", 00:27:02.664 "superblock": true, 00:27:02.664 "num_base_bdevs": 2, 00:27:02.664 "num_base_bdevs_discovered": 1, 00:27:02.664 "num_base_bdevs_operational": 1, 00:27:02.664 "base_bdevs_list": [ 00:27:02.664 { 00:27:02.664 "name": null, 00:27:02.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.664 "is_configured": false, 00:27:02.664 "data_offset": 256, 00:27:02.664 "data_size": 7936 00:27:02.664 }, 00:27:02.664 { 00:27:02.664 "name": "BaseBdev2", 00:27:02.664 "uuid": "bbdcdbf7-aa09-558d-8569-9df746e4c2cb", 00:27:02.664 "is_configured": true, 00:27:02.664 "data_offset": 256, 00:27:02.664 "data_size": 7936 00:27:02.664 } 00:27:02.664 ] 00:27:02.664 }' 00:27:02.664 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:02.664 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:02.664 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:02.664 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:02.664 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@798 -- # killprocess 107050 00:27:02.664 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@946 -- # '[' -z 107050 ']' 00:27:02.664 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # kill -0 107050 00:27:02.664 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@951 -- # uname 00:27:02.664 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
00:27:02.664 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107050 00:27:02.664 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:02.664 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:02.664 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107050' 00:27:02.664 killing process with pid 107050 00:27:02.664 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@965 -- # kill 107050 00:27:02.664 Received shutdown signal, test time was about 60.000000 seconds 00:27:02.664 00:27:02.664 Latency(us) 00:27:02.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.664 =================================================================================================================== 00:27:02.664 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:02.664 06:58:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # wait 107050 00:27:02.664 [2024-08-14 06:58:29.807919] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:02.664 [2024-08-14 06:58:29.808060] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:02.664 [2024-08-14 06:58:29.808118] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:02.664 [2024-08-14 06:58:29.808129] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:27:02.664 [2024-08-14 06:58:29.840668] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:02.924 06:58:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@800 -- # return 0 00:27:02.924 00:27:02.924 real 0m29.523s 00:27:02.924 user 0m46.441s 00:27:02.924 sys 0m3.841s 00:27:02.924 06:58:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:02.924 06:58:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:02.924 ************************************ 00:27:02.924 END TEST raid_rebuild_test_sb_4k 00:27:02.924 ************************************ 00:27:02.924 06:58:30 bdev_raid -- bdev/bdev_raid.sh@982 -- # base_malloc_params='-m 32' 00:27:02.924 06:58:30 bdev_raid -- bdev/bdev_raid.sh@983 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:27:02.924 06:58:30 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:27:02.924 06:58:30 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:02.924 06:58:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:02.924 ************************************ 00:27:02.924 START TEST raid_state_function_test_sb_md_separate 00:27:02.924 ************************************ 00:27:02.924 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:27:02.925 06:58:30 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:27:02.925 Process raid pid: 107859 00:27:02.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=107859 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 107859' 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 107859 /var/tmp/spdk-raid.sock 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@827 -- # '[' -z 107859 ']' 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:02.925 06:58:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:03.184 [2024-08-14 06:58:30.223198] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:27:03.184 [2024-08-14 06:58:30.223447] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.184 [2024-08-14 06:58:30.369484] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.184 [2024-08-14 06:58:30.421223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.444 [2024-08-14 06:58:30.464931] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:03.444 [2024-08-14 06:58:30.465046] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:04.020 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:04.020 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # return 0 00:27:04.020 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:04.290 [2024-08-14 06:58:31.297457] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:04.290 [2024-08-14 06:58:31.297603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:04.290 [2024-08-14 06:58:31.297650] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:04.290 [2024-08-14 06:58:31.297673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:04.290 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:04.290 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:04.290 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:04.290 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:04.290 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:04.290 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:04.290 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:04.290 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:04.290 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:04.290 06:58:31 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:04.290 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.290 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:04.290 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:04.290 "name": "Existed_Raid", 00:27:04.290 "uuid": "b7247e98-82f0-4425-8afe-ed8318b27b17", 00:27:04.290 "strip_size_kb": 0, 00:27:04.290 "state": "configuring", 00:27:04.290 "raid_level": "raid1", 00:27:04.290 "superblock": true, 00:27:04.290 "num_base_bdevs": 2, 00:27:04.290 "num_base_bdevs_discovered": 0, 00:27:04.290 "num_base_bdevs_operational": 2, 00:27:04.290 "base_bdevs_list": [ 00:27:04.290 { 00:27:04.290 "name": "BaseBdev1", 00:27:04.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.290 "is_configured": false, 00:27:04.290 "data_offset": 0, 00:27:04.290 "data_size": 0 00:27:04.290 }, 00:27:04.290 { 00:27:04.290 "name": "BaseBdev2", 00:27:04.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.290 "is_configured": false, 00:27:04.290 "data_offset": 0, 00:27:04.290 "data_size": 0 00:27:04.290 } 00:27:04.290 ] 00:27:04.290 }' 00:27:04.290 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:04.290 06:58:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:04.859 06:58:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:05.117 [2024-08-14 06:58:32.223744] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:05.117 [2024-08-14 06:58:32.223876] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:27:05.118 06:58:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:05.376 [2024-08-14 06:58:32.439387] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:05.376 [2024-08-14 06:58:32.439522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:05.376 [2024-08-14 06:58:32.439551] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:05.376 [2024-08-14 06:58:32.439561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:05.376 06:58:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:27:05.635 [2024-08-14 06:58:32.676626] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:05.635 BaseBdev1 00:27:05.636 06:58:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:27:05.636 06:58:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:27:05.636 06:58:32 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:05.636 06:58:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:27:05.636 06:58:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:05.636 06:58:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:05.636 06:58:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:05.895 06:58:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:05.895 [ 00:27:05.895 { 00:27:05.895 "name": "BaseBdev1", 00:27:05.895 "aliases": [ 00:27:05.895 "0ce076e5-6ec7-4c63-a58a-87990a8e5682" 00:27:05.895 ], 00:27:05.895 "product_name": "Malloc disk", 00:27:05.895 "block_size": 4096, 00:27:05.895 "num_blocks": 8192, 00:27:05.895 "uuid": "0ce076e5-6ec7-4c63-a58a-87990a8e5682", 00:27:05.895 "md_size": 32, 00:27:05.895 "md_interleave": false, 00:27:05.895 "dif_type": 0, 00:27:05.895 "assigned_rate_limits": { 00:27:05.896 "rw_ios_per_sec": 0, 00:27:05.896 "rw_mbytes_per_sec": 0, 00:27:05.896 "r_mbytes_per_sec": 0, 00:27:05.896 "w_mbytes_per_sec": 0 00:27:05.896 }, 00:27:05.896 "claimed": true, 00:27:05.896 "claim_type": "exclusive_write", 00:27:05.896 "zoned": false, 00:27:05.896 "supported_io_types": { 00:27:05.896 "read": true, 00:27:05.896 "write": true, 00:27:05.896 "unmap": true, 00:27:05.896 "flush": true, 00:27:05.896 "reset": true, 00:27:05.896 "nvme_admin": false, 00:27:05.896 "nvme_io": false, 00:27:05.896 "nvme_io_md": false, 00:27:05.896 "write_zeroes": true, 00:27:05.896 "zcopy": true, 00:27:05.896 "get_zone_info": false, 00:27:05.896 "zone_management": false, 00:27:05.896 "zone_append": false, 00:27:05.896 "compare": false, 00:27:05.896 "compare_and_write": false, 00:27:05.896 "abort": true, 00:27:05.896 "seek_hole": false, 00:27:05.896 "seek_data": false, 00:27:05.896 "copy": true, 00:27:05.896 "nvme_iov_md": false 00:27:05.896 }, 00:27:05.896 "memory_domains": [ 00:27:05.896 { 00:27:05.896 "dma_device_id": "system", 00:27:05.896 "dma_device_type": 1 00:27:05.896 }, 00:27:05.896 { 00:27:05.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:05.896 "dma_device_type": 2 00:27:05.896 } 00:27:05.896 ], 00:27:05.896 "driver_specific": {} 00:27:05.896 } 00:27:05.896 ] 00:27:05.896 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:27:05.896 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:05.896 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:05.896 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:05.896 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:05.896 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:05.896 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:27:05.896 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:05.896 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:05.896 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:05.896 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:05.896 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:05.896 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.158 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:06.158 "name": "Existed_Raid", 00:27:06.158 "uuid": "3f8f2028-b724-4cc5-9f90-d6917e4bc1d4", 00:27:06.158 "strip_size_kb": 0, 00:27:06.158 "state": "configuring", 00:27:06.158 "raid_level": "raid1", 00:27:06.158 "superblock": true, 00:27:06.158 "num_base_bdevs": 2, 00:27:06.158 "num_base_bdevs_discovered": 1, 00:27:06.158 "num_base_bdevs_operational": 2, 00:27:06.158 "base_bdevs_list": [ 00:27:06.158 { 00:27:06.158 "name": "BaseBdev1", 00:27:06.158 "uuid": "0ce076e5-6ec7-4c63-a58a-87990a8e5682", 00:27:06.158 "is_configured": true, 00:27:06.158 "data_offset": 256, 00:27:06.158 "data_size": 7936 00:27:06.158 }, 00:27:06.158 { 00:27:06.158 "name": "BaseBdev2", 00:27:06.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.158 "is_configured": false, 00:27:06.158 "data_offset": 0, 00:27:06.158 "data_size": 0 00:27:06.158 } 00:27:06.158 ] 00:27:06.158 }' 00:27:06.158 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:06.158 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:06.726 06:58:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:06.984 [2024-08-14 06:58:34.098275] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:06.984 [2024-08-14 06:58:34.098433] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:27:06.984 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:07.243 [2024-08-14 06:58:34.310053] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:07.243 [2024-08-14 06:58:34.312236] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:07.243 [2024-08-14 06:58:34.312332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:07.243 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:27:07.243 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:07.243 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:07.243 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:07.243 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:07.243 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:07.243 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:07.243 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:07.243 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:07.243 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:07.243 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:07.243 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:07.243 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.243 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:07.501 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:07.501 "name": "Existed_Raid", 00:27:07.501 "uuid": "d874a58a-fdf9-428b-8f37-3d6c68260b0d", 00:27:07.501 "strip_size_kb": 0, 00:27:07.501 "state": "configuring", 00:27:07.501 "raid_level": "raid1", 00:27:07.501 "superblock": true, 00:27:07.501 "num_base_bdevs": 2, 00:27:07.501 "num_base_bdevs_discovered": 1, 00:27:07.501 "num_base_bdevs_operational": 2, 00:27:07.501 "base_bdevs_list": [ 00:27:07.501 { 00:27:07.501 "name": "BaseBdev1", 00:27:07.501 "uuid": "0ce076e5-6ec7-4c63-a58a-87990a8e5682", 00:27:07.501 "is_configured": true, 00:27:07.501 "data_offset": 256, 00:27:07.501 "data_size": 7936 00:27:07.501 }, 00:27:07.501 { 00:27:07.501 "name": "BaseBdev2", 00:27:07.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:07.501 "is_configured": false, 00:27:07.501 "data_offset": 0, 00:27:07.501 "data_size": 0 00:27:07.501 } 00:27:07.501 ] 00:27:07.501 }' 00:27:07.501 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:07.501 06:58:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:08.068 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:27:08.068 [2024-08-14 06:58:35.213160] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:08.068 [2024-08-14 06:58:35.213398] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:27:08.068 [2024-08-14 06:58:35.213418] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:08.068 [2024-08-14 06:58:35.213543] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:27:08.068 [2024-08-14 06:58:35.213678] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:27:08.068 [2024-08-14 06:58:35.213690] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:27:08.068 [2024-08-14 06:58:35.213796] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:08.068 BaseBdev2 00:27:08.068 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:27:08.068 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:27:08.068 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:08.068 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:27:08.068 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:08.068 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:08.068 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:08.327 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:08.586 [ 00:27:08.586 { 00:27:08.586 "name": "BaseBdev2", 00:27:08.586 "aliases": [ 00:27:08.586 "e2e9b246-2c82-4e78-adc7-e79382f625be" 00:27:08.586 ], 00:27:08.586 "product_name": "Malloc disk", 00:27:08.586 "block_size": 4096, 00:27:08.586 "num_blocks": 8192, 00:27:08.586 "uuid": "e2e9b246-2c82-4e78-adc7-e79382f625be", 00:27:08.586 "md_size": 32, 00:27:08.586 "md_interleave": false, 00:27:08.586 "dif_type": 0, 00:27:08.587 "assigned_rate_limits": { 00:27:08.587 "rw_ios_per_sec": 0, 00:27:08.587 "rw_mbytes_per_sec": 0, 00:27:08.587 "r_mbytes_per_sec": 0, 00:27:08.587 "w_mbytes_per_sec": 0 00:27:08.587 }, 00:27:08.587 "claimed": true, 00:27:08.587 "claim_type": "exclusive_write", 00:27:08.587 "zoned": false, 00:27:08.587 "supported_io_types": { 00:27:08.587 "read": true, 00:27:08.587 "write": true, 00:27:08.587 "unmap": true, 00:27:08.587 "flush": true, 00:27:08.587 "reset": true, 00:27:08.587 "nvme_admin": false, 00:27:08.587 "nvme_io": false, 00:27:08.587 "nvme_io_md": false, 00:27:08.587 "write_zeroes": true, 00:27:08.587 "zcopy": true, 00:27:08.587 "get_zone_info": false, 00:27:08.587 "zone_management": false, 00:27:08.587 "zone_append": false, 00:27:08.587 "compare": false, 00:27:08.587 "compare_and_write": false, 00:27:08.587 "abort": true, 00:27:08.587 "seek_hole": false, 00:27:08.587 "seek_data": false, 00:27:08.587 "copy": true, 00:27:08.587 "nvme_iov_md": false 00:27:08.587 }, 00:27:08.587 "memory_domains": [ 00:27:08.587 { 00:27:08.587 "dma_device_id": "system", 00:27:08.587 "dma_device_type": 1 00:27:08.587 }, 00:27:08.587 { 00:27:08.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:08.587 "dma_device_type": 2 00:27:08.587 } 00:27:08.587 ], 00:27:08.587 "driver_specific": {} 00:27:08.587 } 00:27:08.587 ] 00:27:08.587 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:27:08.587 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
00:27:08.587 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:08.587 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:08.587 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:08.587 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:08.587 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:08.587 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:08.587 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:08.587 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:08.587 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:08.587 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:08.587 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:08.587 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.587 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:08.845 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:08.845 "name": "Existed_Raid", 00:27:08.845 "uuid": "d874a58a-fdf9-428b-8f37-3d6c68260b0d", 00:27:08.845 "strip_size_kb": 0, 00:27:08.845 "state": "online", 00:27:08.845 "raid_level": "raid1", 00:27:08.845 "superblock": true, 00:27:08.845 "num_base_bdevs": 2, 00:27:08.845 "num_base_bdevs_discovered": 2, 00:27:08.845 "num_base_bdevs_operational": 2, 00:27:08.845 "base_bdevs_list": [ 00:27:08.845 { 00:27:08.845 "name": "BaseBdev1", 00:27:08.845 "uuid": "0ce076e5-6ec7-4c63-a58a-87990a8e5682", 00:27:08.845 "is_configured": true, 00:27:08.845 "data_offset": 256, 00:27:08.845 "data_size": 7936 00:27:08.845 }, 00:27:08.845 { 00:27:08.845 "name": "BaseBdev2", 00:27:08.845 "uuid": "e2e9b246-2c82-4e78-adc7-e79382f625be", 00:27:08.845 "is_configured": true, 00:27:08.845 "data_offset": 256, 00:27:08.845 "data_size": 7936 00:27:08.845 } 00:27:08.846 ] 00:27:08.846 }' 00:27:08.846 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:08.846 06:58:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:09.414 06:58:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:27:09.414 06:58:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:09.414 06:58:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:09.414 06:58:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:09.414 06:58:36 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:09.414 06:58:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:27:09.414 06:58:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:09.414 06:58:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:09.414 [2024-08-14 06:58:36.611210] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:09.414 06:58:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:09.414 "name": "Existed_Raid", 00:27:09.414 "aliases": [ 00:27:09.414 "d874a58a-fdf9-428b-8f37-3d6c68260b0d" 00:27:09.414 ], 00:27:09.414 "product_name": "Raid Volume", 00:27:09.414 "block_size": 4096, 00:27:09.414 "num_blocks": 7936, 00:27:09.414 "uuid": "d874a58a-fdf9-428b-8f37-3d6c68260b0d", 00:27:09.414 "md_size": 32, 00:27:09.414 "md_interleave": false, 00:27:09.414 "dif_type": 0, 00:27:09.414 "assigned_rate_limits": { 00:27:09.414 "rw_ios_per_sec": 0, 00:27:09.414 "rw_mbytes_per_sec": 0, 00:27:09.414 "r_mbytes_per_sec": 0, 00:27:09.414 "w_mbytes_per_sec": 0 00:27:09.414 }, 00:27:09.414 "claimed": false, 00:27:09.414 "zoned": false, 00:27:09.414 "supported_io_types": { 00:27:09.414 "read": true, 00:27:09.414 "write": true, 00:27:09.414 "unmap": false, 00:27:09.414 "flush": false, 00:27:09.414 "reset": true, 00:27:09.414 "nvme_admin": false, 00:27:09.414 "nvme_io": false, 00:27:09.414 "nvme_io_md": false, 00:27:09.414 "write_zeroes": true, 00:27:09.414 "zcopy": false, 00:27:09.414 "get_zone_info": false, 00:27:09.414 "zone_management": false, 00:27:09.414 "zone_append": false, 00:27:09.414 "compare": false, 00:27:09.414 "compare_and_write": false, 00:27:09.414 "abort": false, 00:27:09.414 "seek_hole": false, 00:27:09.414 "seek_data": false, 00:27:09.414 "copy": false, 00:27:09.414 "nvme_iov_md": false 00:27:09.414 }, 00:27:09.414 "memory_domains": [ 00:27:09.414 { 00:27:09.414 "dma_device_id": "system", 00:27:09.414 "dma_device_type": 1 00:27:09.414 }, 00:27:09.414 { 00:27:09.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.414 "dma_device_type": 2 00:27:09.414 }, 00:27:09.414 { 00:27:09.414 "dma_device_id": "system", 00:27:09.414 "dma_device_type": 1 00:27:09.414 }, 00:27:09.414 { 00:27:09.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.414 "dma_device_type": 2 00:27:09.414 } 00:27:09.414 ], 00:27:09.414 "driver_specific": { 00:27:09.414 "raid": { 00:27:09.414 "uuid": "d874a58a-fdf9-428b-8f37-3d6c68260b0d", 00:27:09.414 "strip_size_kb": 0, 00:27:09.414 "state": "online", 00:27:09.414 "raid_level": "raid1", 00:27:09.414 "superblock": true, 00:27:09.414 "num_base_bdevs": 2, 00:27:09.414 "num_base_bdevs_discovered": 2, 00:27:09.414 "num_base_bdevs_operational": 2, 00:27:09.414 "base_bdevs_list": [ 00:27:09.414 { 00:27:09.414 "name": "BaseBdev1", 00:27:09.414 "uuid": "0ce076e5-6ec7-4c63-a58a-87990a8e5682", 00:27:09.414 "is_configured": true, 00:27:09.414 "data_offset": 256, 00:27:09.414 "data_size": 7936 00:27:09.414 }, 00:27:09.414 { 00:27:09.414 "name": "BaseBdev2", 00:27:09.414 "uuid": "e2e9b246-2c82-4e78-adc7-e79382f625be", 00:27:09.414 "is_configured": true, 00:27:09.414 "data_offset": 256, 00:27:09.414 "data_size": 7936 00:27:09.414 } 00:27:09.414 ] 00:27:09.415 } 00:27:09.415 } 00:27:09.415 }' 00:27:09.415 
06:58:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:09.674 06:58:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:27:09.674 BaseBdev2' 00:27:09.674 06:58:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:09.674 06:58:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:09.674 06:58:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:09.674 06:58:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:09.674 "name": "BaseBdev1", 00:27:09.674 "aliases": [ 00:27:09.674 "0ce076e5-6ec7-4c63-a58a-87990a8e5682" 00:27:09.674 ], 00:27:09.674 "product_name": "Malloc disk", 00:27:09.674 "block_size": 4096, 00:27:09.674 "num_blocks": 8192, 00:27:09.674 "uuid": "0ce076e5-6ec7-4c63-a58a-87990a8e5682", 00:27:09.674 "md_size": 32, 00:27:09.674 "md_interleave": false, 00:27:09.674 "dif_type": 0, 00:27:09.674 "assigned_rate_limits": { 00:27:09.674 "rw_ios_per_sec": 0, 00:27:09.674 "rw_mbytes_per_sec": 0, 00:27:09.674 "r_mbytes_per_sec": 0, 00:27:09.674 "w_mbytes_per_sec": 0 00:27:09.674 }, 00:27:09.674 "claimed": true, 00:27:09.674 "claim_type": "exclusive_write", 00:27:09.674 "zoned": false, 00:27:09.674 "supported_io_types": { 00:27:09.674 "read": true, 00:27:09.674 "write": true, 00:27:09.674 "unmap": true, 00:27:09.674 "flush": true, 00:27:09.674 "reset": true, 00:27:09.674 "nvme_admin": false, 00:27:09.674 "nvme_io": false, 00:27:09.674 "nvme_io_md": false, 00:27:09.674 "write_zeroes": true, 00:27:09.674 "zcopy": true, 00:27:09.674 "get_zone_info": false, 00:27:09.674 "zone_management": false, 00:27:09.674 "zone_append": false, 00:27:09.674 "compare": false, 00:27:09.674 "compare_and_write": false, 00:27:09.674 "abort": true, 00:27:09.674 "seek_hole": false, 00:27:09.674 "seek_data": false, 00:27:09.674 "copy": true, 00:27:09.674 "nvme_iov_md": false 00:27:09.674 }, 00:27:09.674 "memory_domains": [ 00:27:09.674 { 00:27:09.674 "dma_device_id": "system", 00:27:09.674 "dma_device_type": 1 00:27:09.674 }, 00:27:09.674 { 00:27:09.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.674 "dma_device_type": 2 00:27:09.674 } 00:27:09.674 ], 00:27:09.674 "driver_specific": {} 00:27:09.674 }' 00:27:09.674 06:58:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:09.933 06:58:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:09.933 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:09.933 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:09.933 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:09.933 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:09.933 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:09.933 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:09.933 
06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:09.933 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:09.933 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:10.192 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:10.192 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:10.192 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:10.192 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:10.192 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:10.192 "name": "BaseBdev2", 00:27:10.192 "aliases": [ 00:27:10.192 "e2e9b246-2c82-4e78-adc7-e79382f625be" 00:27:10.192 ], 00:27:10.192 "product_name": "Malloc disk", 00:27:10.192 "block_size": 4096, 00:27:10.192 "num_blocks": 8192, 00:27:10.192 "uuid": "e2e9b246-2c82-4e78-adc7-e79382f625be", 00:27:10.192 "md_size": 32, 00:27:10.192 "md_interleave": false, 00:27:10.192 "dif_type": 0, 00:27:10.192 "assigned_rate_limits": { 00:27:10.192 "rw_ios_per_sec": 0, 00:27:10.192 "rw_mbytes_per_sec": 0, 00:27:10.192 "r_mbytes_per_sec": 0, 00:27:10.192 "w_mbytes_per_sec": 0 00:27:10.192 }, 00:27:10.192 "claimed": true, 00:27:10.192 "claim_type": "exclusive_write", 00:27:10.192 "zoned": false, 00:27:10.192 "supported_io_types": { 00:27:10.192 "read": true, 00:27:10.192 "write": true, 00:27:10.192 "unmap": true, 00:27:10.192 "flush": true, 00:27:10.192 "reset": true, 00:27:10.192 "nvme_admin": false, 00:27:10.192 "nvme_io": false, 00:27:10.192 "nvme_io_md": false, 00:27:10.192 "write_zeroes": true, 00:27:10.192 "zcopy": true, 00:27:10.192 "get_zone_info": false, 00:27:10.192 "zone_management": false, 00:27:10.192 "zone_append": false, 00:27:10.192 "compare": false, 00:27:10.192 "compare_and_write": false, 00:27:10.192 "abort": true, 00:27:10.192 "seek_hole": false, 00:27:10.192 "seek_data": false, 00:27:10.192 "copy": true, 00:27:10.192 "nvme_iov_md": false 00:27:10.192 }, 00:27:10.192 "memory_domains": [ 00:27:10.192 { 00:27:10.192 "dma_device_id": "system", 00:27:10.192 "dma_device_type": 1 00:27:10.192 }, 00:27:10.192 { 00:27:10.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:10.192 "dma_device_type": 2 00:27:10.192 } 00:27:10.192 ], 00:27:10.192 "driver_specific": {} 00:27:10.192 }' 00:27:10.192 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:10.452 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:10.452 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:10.452 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:10.452 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:10.452 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:10.452 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:27:10.452 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:10.452 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:10.452 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:10.711 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:10.711 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:10.711 06:58:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:10.971 [2024-08-14 06:58:37.996700] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.971 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:11.231 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:11.231 "name": "Existed_Raid", 00:27:11.231 "uuid": "d874a58a-fdf9-428b-8f37-3d6c68260b0d", 00:27:11.231 "strip_size_kb": 0, 00:27:11.231 "state": "online", 00:27:11.231 "raid_level": "raid1", 00:27:11.231 "superblock": true, 00:27:11.231 "num_base_bdevs": 2, 00:27:11.231 
"num_base_bdevs_discovered": 1, 00:27:11.231 "num_base_bdevs_operational": 1, 00:27:11.231 "base_bdevs_list": [ 00:27:11.231 { 00:27:11.231 "name": null, 00:27:11.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.231 "is_configured": false, 00:27:11.231 "data_offset": 256, 00:27:11.231 "data_size": 7936 00:27:11.231 }, 00:27:11.231 { 00:27:11.231 "name": "BaseBdev2", 00:27:11.231 "uuid": "e2e9b246-2c82-4e78-adc7-e79382f625be", 00:27:11.231 "is_configured": true, 00:27:11.231 "data_offset": 256, 00:27:11.231 "data_size": 7936 00:27:11.231 } 00:27:11.231 ] 00:27:11.231 }' 00:27:11.231 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:11.231 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:11.800 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:27:11.800 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:11.800 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.800 06:58:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:12.071 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:12.071 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:12.071 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:12.351 [2024-08-14 06:58:39.319420] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:12.351 [2024-08-14 06:58:39.319562] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:12.351 [2024-08-14 06:58:39.332487] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:12.351 [2024-08-14 06:58:39.332555] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:12.351 [2024-08-14 06:58:39.332567] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:27:12.351 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:12.351 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:12.351 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.351 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:27:12.610 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:27:12.610 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:27:12.610 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:27:12.610 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 
107859 00:27:12.610 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@946 -- # '[' -z 107859 ']' 00:27:12.610 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # kill -0 107859 00:27:12.610 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # uname 00:27:12.610 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:12.610 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107859 00:27:12.610 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:12.610 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:12.610 killing process with pid 107859 00:27:12.610 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107859' 00:27:12.610 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@965 -- # kill 107859 00:27:12.610 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # wait 107859 00:27:12.610 [2024-08-14 06:58:39.691629] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:12.610 [2024-08-14 06:58:39.692738] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:12.870 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:27:12.870 00:27:12.870 real 0m9.809s 00:27:12.870 user 0m17.614s 00:27:12.870 sys 0m1.530s 00:27:12.870 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:12.870 06:58:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:12.870 ************************************ 00:27:12.870 END TEST raid_state_function_test_sb_md_separate 00:27:12.870 ************************************ 00:27:12.870 06:58:39 bdev_raid -- bdev/bdev_raid.sh@984 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:27:12.870 06:58:39 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:27:12.870 06:58:39 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:12.870 06:58:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:12.870 ************************************ 00:27:12.870 START TEST raid_superblock_test_md_separate 00:27:12.870 ************************************ 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 
00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@414 -- # local strip_size 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@427 -- # raid_pid=108200 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@428 -- # waitforlisten 108200 /var/tmp/spdk-raid.sock 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@827 -- # '[' -z 108200 ']' 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:12.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:12.870 06:58:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:12.870 [2024-08-14 06:58:40.089629] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
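(Editorial sketch, not part of the trace: the fixture being started above reduces to roughly the following; `waitforlisten` is a helper from the test suite's common scripts, and the binary path, socket and PID are the ones reported in this run.)

# start a bare bdev application with raid debug logging on a private RPC socket
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!                                  # 108200 in this run
# block until the app is listening on the UNIX socket before issuing any RPCs
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock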
00:27:12.870 [2024-08-14 06:58:40.089850] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108200 ] 00:27:13.129 [2024-08-14 06:58:40.238846] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.129 [2024-08-14 06:58:40.290554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.129 [2024-08-14 06:58:40.334513] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:13.129 [2024-08-14 06:58:40.334545] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:14.066 06:58:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:14.066 06:58:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # return 0 00:27:14.066 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:27:14.066 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:27:14.066 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:27:14.066 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:27:14.066 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:14.066 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:14.066 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:27:14.066 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:14.066 06:58:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:27:14.066 malloc1 00:27:14.066 06:58:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:14.325 [2024-08-14 06:58:41.404669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:14.325 [2024-08-14 06:58:41.404761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:14.325 [2024-08-14 06:58:41.404792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:27:14.325 [2024-08-14 06:58:41.404801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:14.325 [2024-08-14 06:58:41.406893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:14.325 [2024-08-14 06:58:41.406941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:14.325 pt1 00:27:14.325 06:58:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:27:14.325 06:58:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:27:14.325 06:58:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:27:14.325 
06:58:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:27:14.325 06:58:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:14.325 06:58:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:14.325 06:58:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:27:14.325 06:58:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:14.325 06:58:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:27:14.585 malloc2 00:27:14.585 06:58:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:14.846 [2024-08-14 06:58:41.840972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:14.846 [2024-08-14 06:58:41.841093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:14.846 [2024-08-14 06:58:41.841127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:14.846 [2024-08-14 06:58:41.841136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:14.846 [2024-08-14 06:58:41.843370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:14.846 [2024-08-14 06:58:41.843466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:14.846 pt2 00:27:14.846 06:58:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:27:14.846 06:58:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:27:14.846 06:58:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:27:14.846 [2024-08-14 06:58:42.060646] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:14.846 [2024-08-14 06:58:42.062657] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:14.846 [2024-08-14 06:58:42.062856] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:27:14.846 [2024-08-14 06:58:42.062874] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:14.846 [2024-08-14 06:58:42.062973] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:27:14.846 [2024-08-14 06:58:42.063087] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:27:14.846 [2024-08-14 06:58:42.063097] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:27:14.846 [2024-08-14 06:58:42.063245] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:14.846 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:14.846 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=raid_bdev1 00:27:14.846 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:14.846 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:14.846 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:14.846 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:14.846 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:14.846 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:14.846 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:14.846 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:14.846 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.846 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.106 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:15.106 "name": "raid_bdev1", 00:27:15.106 "uuid": "df994e0f-e713-47b9-99e0-bc2ca62710eb", 00:27:15.106 "strip_size_kb": 0, 00:27:15.106 "state": "online", 00:27:15.106 "raid_level": "raid1", 00:27:15.106 "superblock": true, 00:27:15.106 "num_base_bdevs": 2, 00:27:15.106 "num_base_bdevs_discovered": 2, 00:27:15.106 "num_base_bdevs_operational": 2, 00:27:15.106 "base_bdevs_list": [ 00:27:15.106 { 00:27:15.106 "name": "pt1", 00:27:15.106 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:15.106 "is_configured": true, 00:27:15.106 "data_offset": 256, 00:27:15.106 "data_size": 7936 00:27:15.106 }, 00:27:15.106 { 00:27:15.106 "name": "pt2", 00:27:15.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:15.106 "is_configured": true, 00:27:15.106 "data_offset": 256, 00:27:15.106 "data_size": 7936 00:27:15.106 } 00:27:15.106 ] 00:27:15.106 }' 00:27:15.106 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:15.106 06:58:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:15.674 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:27:15.674 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:15.674 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:15.674 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:15.674 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:15.674 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:27:15.674 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:15.674 06:58:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:15.934 [2024-08-14 
06:58:43.087237] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:15.934 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:15.934 "name": "raid_bdev1", 00:27:15.934 "aliases": [ 00:27:15.934 "df994e0f-e713-47b9-99e0-bc2ca62710eb" 00:27:15.934 ], 00:27:15.934 "product_name": "Raid Volume", 00:27:15.934 "block_size": 4096, 00:27:15.934 "num_blocks": 7936, 00:27:15.934 "uuid": "df994e0f-e713-47b9-99e0-bc2ca62710eb", 00:27:15.934 "md_size": 32, 00:27:15.934 "md_interleave": false, 00:27:15.934 "dif_type": 0, 00:27:15.934 "assigned_rate_limits": { 00:27:15.934 "rw_ios_per_sec": 0, 00:27:15.934 "rw_mbytes_per_sec": 0, 00:27:15.934 "r_mbytes_per_sec": 0, 00:27:15.934 "w_mbytes_per_sec": 0 00:27:15.934 }, 00:27:15.934 "claimed": false, 00:27:15.934 "zoned": false, 00:27:15.934 "supported_io_types": { 00:27:15.934 "read": true, 00:27:15.934 "write": true, 00:27:15.934 "unmap": false, 00:27:15.934 "flush": false, 00:27:15.934 "reset": true, 00:27:15.934 "nvme_admin": false, 00:27:15.934 "nvme_io": false, 00:27:15.934 "nvme_io_md": false, 00:27:15.934 "write_zeroes": true, 00:27:15.934 "zcopy": false, 00:27:15.934 "get_zone_info": false, 00:27:15.934 "zone_management": false, 00:27:15.934 "zone_append": false, 00:27:15.934 "compare": false, 00:27:15.934 "compare_and_write": false, 00:27:15.934 "abort": false, 00:27:15.934 "seek_hole": false, 00:27:15.934 "seek_data": false, 00:27:15.934 "copy": false, 00:27:15.934 "nvme_iov_md": false 00:27:15.934 }, 00:27:15.934 "memory_domains": [ 00:27:15.934 { 00:27:15.934 "dma_device_id": "system", 00:27:15.934 "dma_device_type": 1 00:27:15.934 }, 00:27:15.934 { 00:27:15.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:15.934 "dma_device_type": 2 00:27:15.934 }, 00:27:15.934 { 00:27:15.934 "dma_device_id": "system", 00:27:15.934 "dma_device_type": 1 00:27:15.934 }, 00:27:15.934 { 00:27:15.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:15.934 "dma_device_type": 2 00:27:15.934 } 00:27:15.934 ], 00:27:15.934 "driver_specific": { 00:27:15.934 "raid": { 00:27:15.934 "uuid": "df994e0f-e713-47b9-99e0-bc2ca62710eb", 00:27:15.934 "strip_size_kb": 0, 00:27:15.934 "state": "online", 00:27:15.934 "raid_level": "raid1", 00:27:15.934 "superblock": true, 00:27:15.934 "num_base_bdevs": 2, 00:27:15.934 "num_base_bdevs_discovered": 2, 00:27:15.934 "num_base_bdevs_operational": 2, 00:27:15.934 "base_bdevs_list": [ 00:27:15.934 { 00:27:15.934 "name": "pt1", 00:27:15.934 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:15.934 "is_configured": true, 00:27:15.934 "data_offset": 256, 00:27:15.934 "data_size": 7936 00:27:15.934 }, 00:27:15.934 { 00:27:15.934 "name": "pt2", 00:27:15.934 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:15.934 "is_configured": true, 00:27:15.934 "data_offset": 256, 00:27:15.934 "data_size": 7936 00:27:15.934 } 00:27:15.934 ] 00:27:15.934 } 00:27:15.934 } 00:27:15.934 }' 00:27:15.934 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:15.934 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:15.934 pt2' 00:27:15.934 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:15.934 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:15.934 06:58:43 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:16.193 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:16.193 "name": "pt1", 00:27:16.193 "aliases": [ 00:27:16.193 "00000000-0000-0000-0000-000000000001" 00:27:16.193 ], 00:27:16.193 "product_name": "passthru", 00:27:16.193 "block_size": 4096, 00:27:16.193 "num_blocks": 8192, 00:27:16.193 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:16.193 "md_size": 32, 00:27:16.193 "md_interleave": false, 00:27:16.193 "dif_type": 0, 00:27:16.193 "assigned_rate_limits": { 00:27:16.193 "rw_ios_per_sec": 0, 00:27:16.193 "rw_mbytes_per_sec": 0, 00:27:16.193 "r_mbytes_per_sec": 0, 00:27:16.194 "w_mbytes_per_sec": 0 00:27:16.194 }, 00:27:16.194 "claimed": true, 00:27:16.194 "claim_type": "exclusive_write", 00:27:16.194 "zoned": false, 00:27:16.194 "supported_io_types": { 00:27:16.194 "read": true, 00:27:16.194 "write": true, 00:27:16.194 "unmap": true, 00:27:16.194 "flush": true, 00:27:16.194 "reset": true, 00:27:16.194 "nvme_admin": false, 00:27:16.194 "nvme_io": false, 00:27:16.194 "nvme_io_md": false, 00:27:16.194 "write_zeroes": true, 00:27:16.194 "zcopy": true, 00:27:16.194 "get_zone_info": false, 00:27:16.194 "zone_management": false, 00:27:16.194 "zone_append": false, 00:27:16.194 "compare": false, 00:27:16.194 "compare_and_write": false, 00:27:16.194 "abort": true, 00:27:16.194 "seek_hole": false, 00:27:16.194 "seek_data": false, 00:27:16.194 "copy": true, 00:27:16.194 "nvme_iov_md": false 00:27:16.194 }, 00:27:16.194 "memory_domains": [ 00:27:16.194 { 00:27:16.194 "dma_device_id": "system", 00:27:16.194 "dma_device_type": 1 00:27:16.194 }, 00:27:16.194 { 00:27:16.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:16.194 "dma_device_type": 2 00:27:16.194 } 00:27:16.194 ], 00:27:16.194 "driver_specific": { 00:27:16.194 "passthru": { 00:27:16.194 "name": "pt1", 00:27:16.194 "base_bdev_name": "malloc1" 00:27:16.194 } 00:27:16.194 } 00:27:16.194 }' 00:27:16.194 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:16.194 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:16.453 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:16.453 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:16.453 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:16.453 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:16.453 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:16.453 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:16.453 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:16.453 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:16.453 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:16.453 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:16.453 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
00:27:16.453 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:16.453 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:16.711 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:16.711 "name": "pt2", 00:27:16.711 "aliases": [ 00:27:16.711 "00000000-0000-0000-0000-000000000002" 00:27:16.711 ], 00:27:16.711 "product_name": "passthru", 00:27:16.711 "block_size": 4096, 00:27:16.711 "num_blocks": 8192, 00:27:16.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:16.711 "md_size": 32, 00:27:16.712 "md_interleave": false, 00:27:16.712 "dif_type": 0, 00:27:16.712 "assigned_rate_limits": { 00:27:16.712 "rw_ios_per_sec": 0, 00:27:16.712 "rw_mbytes_per_sec": 0, 00:27:16.712 "r_mbytes_per_sec": 0, 00:27:16.712 "w_mbytes_per_sec": 0 00:27:16.712 }, 00:27:16.712 "claimed": true, 00:27:16.712 "claim_type": "exclusive_write", 00:27:16.712 "zoned": false, 00:27:16.712 "supported_io_types": { 00:27:16.712 "read": true, 00:27:16.712 "write": true, 00:27:16.712 "unmap": true, 00:27:16.712 "flush": true, 00:27:16.712 "reset": true, 00:27:16.712 "nvme_admin": false, 00:27:16.712 "nvme_io": false, 00:27:16.712 "nvme_io_md": false, 00:27:16.712 "write_zeroes": true, 00:27:16.712 "zcopy": true, 00:27:16.712 "get_zone_info": false, 00:27:16.712 "zone_management": false, 00:27:16.712 "zone_append": false, 00:27:16.712 "compare": false, 00:27:16.712 "compare_and_write": false, 00:27:16.712 "abort": true, 00:27:16.712 "seek_hole": false, 00:27:16.712 "seek_data": false, 00:27:16.712 "copy": true, 00:27:16.712 "nvme_iov_md": false 00:27:16.712 }, 00:27:16.712 "memory_domains": [ 00:27:16.712 { 00:27:16.712 "dma_device_id": "system", 00:27:16.712 "dma_device_type": 1 00:27:16.712 }, 00:27:16.712 { 00:27:16.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:16.712 "dma_device_type": 2 00:27:16.712 } 00:27:16.712 ], 00:27:16.712 "driver_specific": { 00:27:16.712 "passthru": { 00:27:16.712 "name": "pt2", 00:27:16.712 "base_bdev_name": "malloc2" 00:27:16.712 } 00:27:16.712 } 00:27:16.712 }' 00:27:16.712 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:16.712 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:16.971 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:16.971 06:58:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:16.971 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:16.971 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:16.971 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:16.971 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:16.971 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:16.971 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:16.971 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:17.231 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 
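(Editorial sketch, not part of the trace: the RPCs echoed above, condensed into one sequence. `$rpc` is shorthand introduced here; the sizes, UUIDs and bdev names are the ones shown in the log.)

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# two 32 MB malloc bdevs with 4096-byte blocks and 32 bytes of separate metadata,
# each wrapped in a passthru bdev so the raid module can claim it
$rpc bdev_malloc_create 32 4096 -m 32 -b malloc1
$rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc bdev_malloc_create 32 4096 -m 32 -b malloc2
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# RAID1 over the two passthrus; -s writes a superblock onto the base bdevs
$rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s

# the verification above is a set of jq queries over the bdev JSON, e.g.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
$rpc bdev_get_bdevs -b pt1 | jq '.[] | .block_size, .md_size, .md_interleave, .dif_type'
# expected for an md-separate base bdev: 4096, 32, false, 0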
00:27:17.231 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:27:17.231 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:17.231 [2024-08-14 06:58:44.436896] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:17.231 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=df994e0f-e713-47b9-99e0-bc2ca62710eb 00:27:17.231 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' -z df994e0f-e713-47b9-99e0-bc2ca62710eb ']' 00:27:17.231 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:17.490 [2024-08-14 06:58:44.652332] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:17.490 [2024-08-14 06:58:44.652369] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:17.490 [2024-08-14 06:58:44.652463] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:17.490 [2024-08-14 06:58:44.652529] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:17.490 [2024-08-14 06:58:44.652541] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:27:17.490 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.490 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:27:17.750 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:27:17.750 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:27:17.750 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:27:17.750 06:58:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:18.009 06:58:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:27:18.009 06:58:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:18.267 06:58:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:18.267 06:58:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:18.526 06:58:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:27:18.526 06:58:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:18.526 06:58:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@646 -- # local es=0 00:27:18.526 06:58:45 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:18.526 06:58:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:18.526 06:58:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:27:18.526 06:58:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:18.526 06:58:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:27:18.526 06:58:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:18.526 06:58:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:27:18.526 06:58:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:18.526 06:58:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:18.526 06:58:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:18.526 [2024-08-14 06:58:45.730484] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:18.527 [2024-08-14 06:58:45.732487] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:18.527 [2024-08-14 06:58:45.732556] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:18.527 [2024-08-14 06:58:45.732613] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:18.527 [2024-08-14 06:58:45.732628] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:18.527 [2024-08-14 06:58:45.732639] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:27:18.527 request: 00:27:18.527 { 00:27:18.527 "name": "raid_bdev1", 00:27:18.527 "raid_level": "raid1", 00:27:18.527 "base_bdevs": [ 00:27:18.527 "malloc1", 00:27:18.527 "malloc2" 00:27:18.527 ], 00:27:18.527 "superblock": false, 00:27:18.527 "method": "bdev_raid_create", 00:27:18.527 "req_id": 1 00:27:18.527 } 00:27:18.527 Got JSON-RPC error response 00:27:18.527 response: 00:27:18.527 { 00:27:18.527 "code": -17, 00:27:18.527 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:18.527 } 00:27:18.527 06:58:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@649 -- # es=1 00:27:18.527 06:58:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:27:18.527 06:58:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:27:18.527 06:58:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:27:18.527 06:58:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # 
jq -r '.[]' 00:27:18.527 06:58:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.786 06:58:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:27:18.786 06:58:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:27:18.786 06:58:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:19.045 [2024-08-14 06:58:46.157744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:19.045 [2024-08-14 06:58:46.157838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:19.045 [2024-08-14 06:58:46.157860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:19.045 [2024-08-14 06:58:46.157873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:19.045 [2024-08-14 06:58:46.159846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:19.045 [2024-08-14 06:58:46.159952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:19.045 [2024-08-14 06:58:46.160038] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:19.045 [2024-08-14 06:58:46.160101] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:19.045 pt1 00:27:19.045 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:19.045 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:19.045 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:19.045 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:19.045 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:19.045 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:19.045 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:19.045 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:19.045 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:19.045 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:19.045 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.045 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.305 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:19.305 "name": "raid_bdev1", 00:27:19.305 "uuid": "df994e0f-e713-47b9-99e0-bc2ca62710eb", 00:27:19.305 "strip_size_kb": 0, 00:27:19.305 "state": "configuring", 00:27:19.305 "raid_level": "raid1", 00:27:19.305 "superblock": 
true, 00:27:19.305 "num_base_bdevs": 2, 00:27:19.305 "num_base_bdevs_discovered": 1, 00:27:19.305 "num_base_bdevs_operational": 2, 00:27:19.305 "base_bdevs_list": [ 00:27:19.305 { 00:27:19.305 "name": "pt1", 00:27:19.305 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:19.305 "is_configured": true, 00:27:19.305 "data_offset": 256, 00:27:19.305 "data_size": 7936 00:27:19.305 }, 00:27:19.305 { 00:27:19.305 "name": null, 00:27:19.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:19.305 "is_configured": false, 00:27:19.305 "data_offset": 256, 00:27:19.305 "data_size": 7936 00:27:19.305 } 00:27:19.305 ] 00:27:19.305 }' 00:27:19.305 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:19.305 06:58:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:19.874 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:27:19.874 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:27:19.874 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:27:19.874 06:58:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:20.134 [2024-08-14 06:58:47.152073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:20.134 [2024-08-14 06:58:47.152177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.134 [2024-08-14 06:58:47.152216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:20.134 [2024-08-14 06:58:47.152227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.134 [2024-08-14 06:58:47.152442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.134 [2024-08-14 06:58:47.152465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:20.134 [2024-08-14 06:58:47.152522] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:20.134 [2024-08-14 06:58:47.152551] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:20.134 [2024-08-14 06:58:47.152642] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:27:20.134 [2024-08-14 06:58:47.152653] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:20.134 [2024-08-14 06:58:47.152734] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:27:20.134 [2024-08-14 06:58:47.152822] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:27:20.134 [2024-08-14 06:58:47.152829] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:27:20.134 [2024-08-14 06:58:47.152898] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:20.134 pt2 00:27:20.135 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:27:20.135 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:27:20.135 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 2 00:27:20.135 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:20.135 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:20.135 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:20.135 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:20.135 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:20.135 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:20.135 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:20.135 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:20.135 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:20.135 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.135 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.395 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:20.395 "name": "raid_bdev1", 00:27:20.395 "uuid": "df994e0f-e713-47b9-99e0-bc2ca62710eb", 00:27:20.395 "strip_size_kb": 0, 00:27:20.395 "state": "online", 00:27:20.395 "raid_level": "raid1", 00:27:20.395 "superblock": true, 00:27:20.395 "num_base_bdevs": 2, 00:27:20.395 "num_base_bdevs_discovered": 2, 00:27:20.395 "num_base_bdevs_operational": 2, 00:27:20.395 "base_bdevs_list": [ 00:27:20.395 { 00:27:20.395 "name": "pt1", 00:27:20.395 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:20.395 "is_configured": true, 00:27:20.395 "data_offset": 256, 00:27:20.395 "data_size": 7936 00:27:20.395 }, 00:27:20.395 { 00:27:20.395 "name": "pt2", 00:27:20.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:20.395 "is_configured": true, 00:27:20.395 "data_offset": 256, 00:27:20.395 "data_size": 7936 00:27:20.395 } 00:27:20.395 ] 00:27:20.395 }' 00:27:20.395 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:20.395 06:58:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:20.964 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:27:20.964 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:20.964 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:20.964 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:20.964 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:20.964 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:27:20.964 06:58:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:20.964 06:58:47 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:20.964 [2024-08-14 06:58:48.146735] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:20.964 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:20.964 "name": "raid_bdev1", 00:27:20.964 "aliases": [ 00:27:20.964 "df994e0f-e713-47b9-99e0-bc2ca62710eb" 00:27:20.964 ], 00:27:20.964 "product_name": "Raid Volume", 00:27:20.964 "block_size": 4096, 00:27:20.964 "num_blocks": 7936, 00:27:20.964 "uuid": "df994e0f-e713-47b9-99e0-bc2ca62710eb", 00:27:20.964 "md_size": 32, 00:27:20.964 "md_interleave": false, 00:27:20.964 "dif_type": 0, 00:27:20.964 "assigned_rate_limits": { 00:27:20.964 "rw_ios_per_sec": 0, 00:27:20.964 "rw_mbytes_per_sec": 0, 00:27:20.964 "r_mbytes_per_sec": 0, 00:27:20.964 "w_mbytes_per_sec": 0 00:27:20.964 }, 00:27:20.964 "claimed": false, 00:27:20.964 "zoned": false, 00:27:20.964 "supported_io_types": { 00:27:20.964 "read": true, 00:27:20.964 "write": true, 00:27:20.964 "unmap": false, 00:27:20.964 "flush": false, 00:27:20.964 "reset": true, 00:27:20.964 "nvme_admin": false, 00:27:20.964 "nvme_io": false, 00:27:20.964 "nvme_io_md": false, 00:27:20.964 "write_zeroes": true, 00:27:20.964 "zcopy": false, 00:27:20.964 "get_zone_info": false, 00:27:20.964 "zone_management": false, 00:27:20.964 "zone_append": false, 00:27:20.964 "compare": false, 00:27:20.964 "compare_and_write": false, 00:27:20.965 "abort": false, 00:27:20.965 "seek_hole": false, 00:27:20.965 "seek_data": false, 00:27:20.965 "copy": false, 00:27:20.965 "nvme_iov_md": false 00:27:20.965 }, 00:27:20.965 "memory_domains": [ 00:27:20.965 { 00:27:20.965 "dma_device_id": "system", 00:27:20.965 "dma_device_type": 1 00:27:20.965 }, 00:27:20.965 { 00:27:20.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:20.965 "dma_device_type": 2 00:27:20.965 }, 00:27:20.965 { 00:27:20.965 "dma_device_id": "system", 00:27:20.965 "dma_device_type": 1 00:27:20.965 }, 00:27:20.965 { 00:27:20.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:20.965 "dma_device_type": 2 00:27:20.965 } 00:27:20.965 ], 00:27:20.965 "driver_specific": { 00:27:20.965 "raid": { 00:27:20.965 "uuid": "df994e0f-e713-47b9-99e0-bc2ca62710eb", 00:27:20.965 "strip_size_kb": 0, 00:27:20.965 "state": "online", 00:27:20.965 "raid_level": "raid1", 00:27:20.965 "superblock": true, 00:27:20.965 "num_base_bdevs": 2, 00:27:20.965 "num_base_bdevs_discovered": 2, 00:27:20.965 "num_base_bdevs_operational": 2, 00:27:20.965 "base_bdevs_list": [ 00:27:20.965 { 00:27:20.965 "name": "pt1", 00:27:20.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:20.965 "is_configured": true, 00:27:20.965 "data_offset": 256, 00:27:20.965 "data_size": 7936 00:27:20.965 }, 00:27:20.965 { 00:27:20.965 "name": "pt2", 00:27:20.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:20.965 "is_configured": true, 00:27:20.965 "data_offset": 256, 00:27:20.965 "data_size": 7936 00:27:20.965 } 00:27:20.965 ] 00:27:20.965 } 00:27:20.965 } 00:27:20.965 }' 00:27:20.965 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:20.965 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:20.965 pt2' 00:27:20.965 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:21.225 06:58:48 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:21.225 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:21.225 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:21.225 "name": "pt1", 00:27:21.225 "aliases": [ 00:27:21.225 "00000000-0000-0000-0000-000000000001" 00:27:21.225 ], 00:27:21.225 "product_name": "passthru", 00:27:21.225 "block_size": 4096, 00:27:21.225 "num_blocks": 8192, 00:27:21.225 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:21.225 "md_size": 32, 00:27:21.225 "md_interleave": false, 00:27:21.225 "dif_type": 0, 00:27:21.225 "assigned_rate_limits": { 00:27:21.225 "rw_ios_per_sec": 0, 00:27:21.225 "rw_mbytes_per_sec": 0, 00:27:21.225 "r_mbytes_per_sec": 0, 00:27:21.225 "w_mbytes_per_sec": 0 00:27:21.225 }, 00:27:21.225 "claimed": true, 00:27:21.225 "claim_type": "exclusive_write", 00:27:21.225 "zoned": false, 00:27:21.225 "supported_io_types": { 00:27:21.225 "read": true, 00:27:21.225 "write": true, 00:27:21.225 "unmap": true, 00:27:21.225 "flush": true, 00:27:21.225 "reset": true, 00:27:21.225 "nvme_admin": false, 00:27:21.225 "nvme_io": false, 00:27:21.225 "nvme_io_md": false, 00:27:21.225 "write_zeroes": true, 00:27:21.225 "zcopy": true, 00:27:21.225 "get_zone_info": false, 00:27:21.225 "zone_management": false, 00:27:21.225 "zone_append": false, 00:27:21.225 "compare": false, 00:27:21.225 "compare_and_write": false, 00:27:21.225 "abort": true, 00:27:21.225 "seek_hole": false, 00:27:21.225 "seek_data": false, 00:27:21.225 "copy": true, 00:27:21.225 "nvme_iov_md": false 00:27:21.225 }, 00:27:21.225 "memory_domains": [ 00:27:21.225 { 00:27:21.225 "dma_device_id": "system", 00:27:21.225 "dma_device_type": 1 00:27:21.225 }, 00:27:21.225 { 00:27:21.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:21.225 "dma_device_type": 2 00:27:21.225 } 00:27:21.225 ], 00:27:21.225 "driver_specific": { 00:27:21.225 "passthru": { 00:27:21.225 "name": "pt1", 00:27:21.225 "base_bdev_name": "malloc1" 00:27:21.225 } 00:27:21.225 } 00:27:21.225 }' 00:27:21.225 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:21.225 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:21.485 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:21.485 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:21.485 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:21.485 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:21.485 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:21.485 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:21.485 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:21.485 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:21.485 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:21.745 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:21.745 06:58:48 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:21.745 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:21.745 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:21.745 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:21.745 "name": "pt2", 00:27:21.745 "aliases": [ 00:27:21.745 "00000000-0000-0000-0000-000000000002" 00:27:21.745 ], 00:27:21.745 "product_name": "passthru", 00:27:21.745 "block_size": 4096, 00:27:21.745 "num_blocks": 8192, 00:27:21.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:21.745 "md_size": 32, 00:27:21.745 "md_interleave": false, 00:27:21.745 "dif_type": 0, 00:27:21.745 "assigned_rate_limits": { 00:27:21.745 "rw_ios_per_sec": 0, 00:27:21.745 "rw_mbytes_per_sec": 0, 00:27:21.745 "r_mbytes_per_sec": 0, 00:27:21.745 "w_mbytes_per_sec": 0 00:27:21.745 }, 00:27:21.745 "claimed": true, 00:27:21.745 "claim_type": "exclusive_write", 00:27:21.745 "zoned": false, 00:27:21.745 "supported_io_types": { 00:27:21.745 "read": true, 00:27:21.745 "write": true, 00:27:21.745 "unmap": true, 00:27:21.745 "flush": true, 00:27:21.745 "reset": true, 00:27:21.745 "nvme_admin": false, 00:27:21.745 "nvme_io": false, 00:27:21.745 "nvme_io_md": false, 00:27:21.745 "write_zeroes": true, 00:27:21.745 "zcopy": true, 00:27:21.745 "get_zone_info": false, 00:27:21.745 "zone_management": false, 00:27:21.745 "zone_append": false, 00:27:21.745 "compare": false, 00:27:21.745 "compare_and_write": false, 00:27:21.745 "abort": true, 00:27:21.745 "seek_hole": false, 00:27:21.745 "seek_data": false, 00:27:21.745 "copy": true, 00:27:21.745 "nvme_iov_md": false 00:27:21.745 }, 00:27:21.745 "memory_domains": [ 00:27:21.745 { 00:27:21.745 "dma_device_id": "system", 00:27:21.745 "dma_device_type": 1 00:27:21.745 }, 00:27:21.745 { 00:27:21.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:21.745 "dma_device_type": 2 00:27:21.745 } 00:27:21.745 ], 00:27:21.745 "driver_specific": { 00:27:21.745 "passthru": { 00:27:21.745 "name": "pt2", 00:27:21.745 "base_bdev_name": "malloc2" 00:27:21.745 } 00:27:21.745 } 00:27:21.745 }' 00:27:21.745 06:58:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:22.005 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:22.005 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:22.005 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:22.005 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:22.005 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:22.005 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:22.005 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:22.005 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:22.005 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:22.264 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
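(Editorial sketch, not part of the trace: the superblock behaviour the preceding exchange exercises, recapped using only commands and messages that appear in the log; `$rpc` is the shorthand introduced earlier.)

# with raid_bdev1 and both passthrus deleted, the superblock still lives on the malloc bdevs,
# so creating a new raid directly on them is rejected
$rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
# -> JSON-RPC error -17: "Failed to create RAID bdev raid_bdev1: File exists"

# re-registering the passthrus lets bdev_raid examine their superblocks again
$rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
# raid_bdev1 reappears in "configuring" with 1 of 2 base bdevs discovered
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# with both base bdevs back, raid_bdev1 transitions to "online"
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'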
00:27:22.264 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:22.264 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:27:22.264 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:22.522 [2024-08-14 06:58:49.544392] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:22.522 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # '[' df994e0f-e713-47b9-99e0-bc2ca62710eb '!=' df994e0f-e713-47b9-99e0-bc2ca62710eb ']' 00:27:22.522 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:27:22.522 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:22.522 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:27:22.522 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:22.522 [2024-08-14 06:58:49.759771] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:22.781 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:22.781 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:22.781 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:22.781 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:22.781 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:22.781 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:22.781 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:22.781 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:22.781 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:22.781 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:22.781 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.781 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.781 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:22.781 "name": "raid_bdev1", 00:27:22.781 "uuid": "df994e0f-e713-47b9-99e0-bc2ca62710eb", 00:27:22.781 "strip_size_kb": 0, 00:27:22.781 "state": "online", 00:27:22.781 "raid_level": "raid1", 00:27:22.781 "superblock": true, 00:27:22.781 "num_base_bdevs": 2, 00:27:22.781 "num_base_bdevs_discovered": 1, 00:27:22.782 "num_base_bdevs_operational": 1, 00:27:22.782 "base_bdevs_list": [ 00:27:22.782 { 00:27:22.782 "name": null, 00:27:22.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.782 "is_configured": false, 00:27:22.782 
"data_offset": 256, 00:27:22.782 "data_size": 7936 00:27:22.782 }, 00:27:22.782 { 00:27:22.782 "name": "pt2", 00:27:22.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:22.782 "is_configured": true, 00:27:22.782 "data_offset": 256, 00:27:22.782 "data_size": 7936 00:27:22.782 } 00:27:22.782 ] 00:27:22.782 }' 00:27:22.782 06:58:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:22.782 06:58:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:23.350 06:58:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:23.608 [2024-08-14 06:58:50.726091] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:23.608 [2024-08-14 06:58:50.726230] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:23.608 [2024-08-14 06:58:50.726343] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:23.608 [2024-08-14 06:58:50.726411] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:23.608 [2024-08-14 06:58:50.726477] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:27:23.608 06:58:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:27:23.608 06:58:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.868 06:58:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:27:23.868 06:58:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:27:23.868 06:58:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:27:23.868 06:58:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:27:23.868 06:58:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:24.127 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:24.127 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:27:24.127 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:27:24.127 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:27:24.127 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@534 -- # i=1 00:27:24.127 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:24.127 [2024-08-14 06:58:51.368936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:24.127 [2024-08-14 06:58:51.369027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:24.127 [2024-08-14 06:58:51.369049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:27:24.127 [2024-08-14 06:58:51.369060] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:24.127 [2024-08-14 06:58:51.371255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:24.127 [2024-08-14 06:58:51.371298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:24.127 [2024-08-14 06:58:51.371364] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:24.127 [2024-08-14 06:58:51.371411] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:24.127 [2024-08-14 06:58:51.371498] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:27:24.127 [2024-08-14 06:58:51.371508] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:24.127 [2024-08-14 06:58:51.371583] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:27:24.127 [2024-08-14 06:58:51.371661] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:27:24.127 [2024-08-14 06:58:51.371669] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:27:24.127 [2024-08-14 06:58:51.371753] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:24.127 pt2 00:27:24.386 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:24.386 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:24.386 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:24.386 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:24.386 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:24.386 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:24.386 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:24.386 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:24.386 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:24.386 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:24.386 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.386 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.386 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:24.386 "name": "raid_bdev1", 00:27:24.386 "uuid": "df994e0f-e713-47b9-99e0-bc2ca62710eb", 00:27:24.386 "strip_size_kb": 0, 00:27:24.386 "state": "online", 00:27:24.386 "raid_level": "raid1", 00:27:24.386 "superblock": true, 00:27:24.386 "num_base_bdevs": 2, 00:27:24.386 "num_base_bdevs_discovered": 1, 00:27:24.386 "num_base_bdevs_operational": 1, 00:27:24.386 "base_bdevs_list": [ 00:27:24.386 { 00:27:24.386 "name": null, 00:27:24.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.386 "is_configured": false, 
00:27:24.386 "data_offset": 256, 00:27:24.386 "data_size": 7936 00:27:24.386 }, 00:27:24.386 { 00:27:24.386 "name": "pt2", 00:27:24.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:24.386 "is_configured": true, 00:27:24.386 "data_offset": 256, 00:27:24.386 "data_size": 7936 00:27:24.386 } 00:27:24.386 ] 00:27:24.386 }' 00:27:24.387 06:58:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:24.387 06:58:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:24.956 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:25.215 [2024-08-14 06:58:52.323299] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:25.215 [2024-08-14 06:58:52.323431] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:25.215 [2024-08-14 06:58:52.323520] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:25.215 [2024-08-14 06:58:52.323575] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:25.215 [2024-08-14 06:58:52.323585] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:27:25.215 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.215 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:27:25.475 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:27:25.475 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:27:25.475 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:27:25.475 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:25.735 [2024-08-14 06:58:52.750579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:25.735 [2024-08-14 06:58:52.750660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:25.735 [2024-08-14 06:58:52.750685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:25.735 [2024-08-14 06:58:52.750694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:25.735 [2024-08-14 06:58:52.752788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:25.735 [2024-08-14 06:58:52.752830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:25.735 [2024-08-14 06:58:52.752895] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:25.735 [2024-08-14 06:58:52.752931] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:25.735 [2024-08-14 06:58:52.753071] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:25.735 [2024-08-14 06:58:52.753093] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:27:25.735 [2024-08-14 06:58:52.753115] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:27:25.735 [2024-08-14 06:58:52.753146] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:25.735 [2024-08-14 06:58:52.753241] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:27:25.735 [2024-08-14 06:58:52.753250] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:25.735 [2024-08-14 06:58:52.753328] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:27:25.735 [2024-08-14 06:58:52.753408] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:27:25.735 [2024-08-14 06:58:52.753419] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:27:25.735 [2024-08-14 06:58:52.753504] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:25.735 pt1 00:27:25.735 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:27:25.735 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:25.735 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:25.735 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:25.735 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:25.735 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:25.735 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:25.735 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:25.735 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:25.735 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:25.735 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:25.735 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.735 06:58:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.995 06:58:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:25.995 "name": "raid_bdev1", 00:27:25.995 "uuid": "df994e0f-e713-47b9-99e0-bc2ca62710eb", 00:27:25.995 "strip_size_kb": 0, 00:27:25.995 "state": "online", 00:27:25.995 "raid_level": "raid1", 00:27:25.995 "superblock": true, 00:27:25.995 "num_base_bdevs": 2, 00:27:25.995 "num_base_bdevs_discovered": 1, 00:27:25.995 "num_base_bdevs_operational": 1, 00:27:25.995 "base_bdevs_list": [ 00:27:25.995 { 00:27:25.995 "name": null, 00:27:25.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.995 "is_configured": false, 00:27:25.995 "data_offset": 256, 00:27:25.995 "data_size": 7936 00:27:25.995 }, 00:27:25.995 { 00:27:25.995 "name": "pt2", 00:27:25.995 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:27:25.995 "is_configured": true, 00:27:25.995 "data_offset": 256, 00:27:25.995 "data_size": 7936 00:27:25.995 } 00:27:25.995 ] 00:27:25.995 }' 00:27:25.995 06:58:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:25.995 06:58:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:26.564 06:58:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:26.564 06:58:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:27:26.564 06:58:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:27:26.564 06:58:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:26.564 06:58:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:27:26.823 [2024-08-14 06:58:53.996949] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:26.823 06:58:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # '[' df994e0f-e713-47b9-99e0-bc2ca62710eb '!=' df994e0f-e713-47b9-99e0-bc2ca62710eb ']' 00:27:26.824 06:58:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@578 -- # killprocess 108200 00:27:26.824 06:58:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@946 -- # '[' -z 108200 ']' 00:27:26.824 06:58:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # kill -0 108200 00:27:26.824 06:58:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # uname 00:27:26.824 06:58:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:26.824 06:58:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 108200 00:27:26.824 06:58:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:26.824 06:58:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:26.824 06:58:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 108200' 00:27:26.824 killing process with pid 108200 00:27:26.824 06:58:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@965 -- # kill 108200 00:27:26.824 [2024-08-14 06:58:54.062593] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:26.824 [2024-08-14 06:58:54.062709] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:26.824 [2024-08-14 06:58:54.062769] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:26.824 [2024-08-14 06:58:54.062782] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:27:26.824 06:58:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # wait 108200 00:27:27.083 [2024-08-14 06:58:54.088991] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:27.083 ************************************ 
00:27:27.083 END TEST raid_superblock_test_md_separate 00:27:27.083 ************************************ 00:27:27.083 06:58:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@580 -- # return 0 00:27:27.083 00:27:27.083 real 0m14.333s 00:27:27.083 user 0m26.275s 00:27:27.083 sys 0m2.170s 00:27:27.083 06:58:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:27.083 06:58:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:27.343 06:58:54 bdev_raid -- bdev/bdev_raid.sh@985 -- # '[' true = true ']' 00:27:27.343 06:58:54 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:27:27.343 06:58:54 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:27:27.343 06:58:54 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:27.343 06:58:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:27.343 ************************************ 00:27:27.343 START TEST raid_rebuild_test_sb_md_separate 00:27:27.343 ************************************ 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false true 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # local verify=true 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # local strip_size 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # local create_arg 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:27:27.343 06:58:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@594 -- # local data_offset 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # raid_pid=108684 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # waitforlisten 108684 /var/tmp/spdk-raid.sock 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@827 -- # '[' -z 108684 ']' 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:27.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:27.343 06:58:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:27.343 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:27.343 Zero copy mechanism will not be used. 00:27:27.343 [2024-08-14 06:58:54.499189] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:27:27.343 [2024-08-14 06:58:54.499331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108684 ] 00:27:27.603 [2024-08-14 06:58:54.647136] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.603 [2024-08-14 06:58:54.699640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.603 [2024-08-14 06:58:54.743769] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:27.603 [2024-08-14 06:58:54.743817] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:28.174 06:58:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:28.174 06:58:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # return 0 00:27:28.174 06:58:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:28.174 06:58:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:27:28.433 BaseBdev1_malloc 00:27:28.433 06:58:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:28.691 [2024-08-14 06:58:55.814397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:28.691 [2024-08-14 06:58:55.814612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:28.691 [2024-08-14 06:58:55.814648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:27:28.691 [2024-08-14 06:58:55.814662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:28.691 [2024-08-14 06:58:55.816800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:28.691 [2024-08-14 06:58:55.816850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:28.691 BaseBdev1 00:27:28.691 06:58:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:28.691 06:58:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:27:28.950 BaseBdev2_malloc 00:27:28.951 06:58:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:29.209 [2024-08-14 06:58:56.253493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:29.209 [2024-08-14 06:58:56.253583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:29.209 [2024-08-14 06:58:56.253608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:29.209 [2024-08-14 06:58:56.253623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:29.209 [2024-08-14 06:58:56.255782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:29.209 
[2024-08-14 06:58:56.255835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:29.209 BaseBdev2 00:27:29.209 06:58:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:27:29.468 spare_malloc 00:27:29.468 06:58:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:29.727 spare_delay 00:27:29.727 06:58:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:29.727 [2024-08-14 06:58:56.954237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:29.727 [2024-08-14 06:58:56.954320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:29.727 [2024-08-14 06:58:56.954346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:29.727 [2024-08-14 06:58:56.954357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:29.727 [2024-08-14 06:58:56.956523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:29.727 spare 00:27:29.727 [2024-08-14 06:58:56.956636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:29.727 06:58:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:29.986 [2024-08-14 06:58:57.153997] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:29.986 [2024-08-14 06:58:57.156131] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:29.986 [2024-08-14 06:58:57.156425] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:27:29.986 [2024-08-14 06:58:57.156474] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:29.986 [2024-08-14 06:58:57.156670] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:27:29.986 [2024-08-14 06:58:57.156844] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:27:29.986 [2024-08-14 06:58:57.156892] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:27:29.986 [2024-08-14 06:58:57.157047] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:29.986 06:58:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:29.986 06:58:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:29.986 06:58:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:29.986 06:58:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:29.986 06:58:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:29.986 06:58:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:29.986 06:58:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:29.986 06:58:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:29.986 06:58:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:29.986 06:58:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:29.986 06:58:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.986 06:58:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.246 06:58:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:30.246 "name": "raid_bdev1", 00:27:30.246 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:30.246 "strip_size_kb": 0, 00:27:30.246 "state": "online", 00:27:30.246 "raid_level": "raid1", 00:27:30.246 "superblock": true, 00:27:30.246 "num_base_bdevs": 2, 00:27:30.246 "num_base_bdevs_discovered": 2, 00:27:30.246 "num_base_bdevs_operational": 2, 00:27:30.246 "base_bdevs_list": [ 00:27:30.246 { 00:27:30.246 "name": "BaseBdev1", 00:27:30.246 "uuid": "996bebda-0eb0-5ed9-812c-a63b87c358bf", 00:27:30.246 "is_configured": true, 00:27:30.246 "data_offset": 256, 00:27:30.246 "data_size": 7936 00:27:30.246 }, 00:27:30.246 { 00:27:30.246 "name": "BaseBdev2", 00:27:30.246 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:30.246 "is_configured": true, 00:27:30.246 "data_offset": 256, 00:27:30.246 "data_size": 7936 00:27:30.246 } 00:27:30.246 ] 00:27:30.246 }' 00:27:30.246 06:58:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:30.246 06:58:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:30.814 06:58:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:30.814 06:58:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:27:31.073 [2024-08-14 06:58:58.216389] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:31.073 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:27:31.073 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.073 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:31.332 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:27:31.332 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:27:31.332 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:27:31.332 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:27:31.332 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # nbd_start_disks 
/var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:31.332 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:31.332 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:31.332 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:31.332 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:31.332 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:31.332 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:27:31.332 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:31.332 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:31.332 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:31.591 [2024-08-14 06:58:58.663430] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:27:31.591 /dev/nbd0 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@865 -- # local i 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # break 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:31.591 1+0 records in 00:27:31.591 1+0 records out 00:27:31.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241363 s, 17.0 MB/s 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # size=4096 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # return 0 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:27:31.591 06:58:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:27:32.159 7936+0 records in 00:27:32.159 7936+0 records out 00:27:32.159 32505856 bytes (33 MB, 31 MiB) copied, 0.610553 s, 53.2 MB/s 00:27:32.159 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:32.159 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:32.159 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:32.159 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:32.159 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:27:32.159 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:32.159 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:32.418 [2024-08-14 06:58:59.553343] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:32.418 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:32.418 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:32.418 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:32.418 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:32.418 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:32.418 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:32.418 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:27:32.418 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:27:32.418 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:32.676 [2024-08-14 06:58:59.754525] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:32.676 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:32.676 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:32.676 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:32.676 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:32.676 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:32.676 
06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:32.676 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:32.676 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:32.676 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:32.676 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:32.677 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.677 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.935 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:32.935 "name": "raid_bdev1", 00:27:32.935 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:32.935 "strip_size_kb": 0, 00:27:32.935 "state": "online", 00:27:32.935 "raid_level": "raid1", 00:27:32.935 "superblock": true, 00:27:32.935 "num_base_bdevs": 2, 00:27:32.935 "num_base_bdevs_discovered": 1, 00:27:32.935 "num_base_bdevs_operational": 1, 00:27:32.935 "base_bdevs_list": [ 00:27:32.935 { 00:27:32.935 "name": null, 00:27:32.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.935 "is_configured": false, 00:27:32.935 "data_offset": 256, 00:27:32.935 "data_size": 7936 00:27:32.935 }, 00:27:32.935 { 00:27:32.935 "name": "BaseBdev2", 00:27:32.935 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:32.935 "is_configured": true, 00:27:32.935 "data_offset": 256, 00:27:32.935 "data_size": 7936 00:27:32.935 } 00:27:32.935 ] 00:27:32.935 }' 00:27:32.935 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:32.935 06:58:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:33.503 06:59:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:33.503 [2024-08-14 06:59:00.752937] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:33.503 [2024-08-14 06:59:00.754936] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:27:33.503 [2024-08-14 06:59:00.756874] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:33.763 06:59:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:34.709 06:59:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:34.709 06:59:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:34.709 06:59:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:34.709 06:59:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:34.709 06:59:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:34.709 06:59:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.709 06:59:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.969 06:59:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:34.969 "name": "raid_bdev1", 00:27:34.969 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:34.969 "strip_size_kb": 0, 00:27:34.969 "state": "online", 00:27:34.969 "raid_level": "raid1", 00:27:34.969 "superblock": true, 00:27:34.969 "num_base_bdevs": 2, 00:27:34.969 "num_base_bdevs_discovered": 2, 00:27:34.969 "num_base_bdevs_operational": 2, 00:27:34.969 "process": { 00:27:34.969 "type": "rebuild", 00:27:34.969 "target": "spare", 00:27:34.969 "progress": { 00:27:34.969 "blocks": 3072, 00:27:34.969 "percent": 38 00:27:34.969 } 00:27:34.969 }, 00:27:34.969 "base_bdevs_list": [ 00:27:34.969 { 00:27:34.969 "name": "spare", 00:27:34.969 "uuid": "7c2ade1e-10e3-5619-87a8-d3ae0fd27c12", 00:27:34.969 "is_configured": true, 00:27:34.969 "data_offset": 256, 00:27:34.969 "data_size": 7936 00:27:34.969 }, 00:27:34.969 { 00:27:34.969 "name": "BaseBdev2", 00:27:34.969 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:34.969 "is_configured": true, 00:27:34.969 "data_offset": 256, 00:27:34.969 "data_size": 7936 00:27:34.969 } 00:27:34.969 ] 00:27:34.969 }' 00:27:34.969 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:34.969 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:34.969 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:34.969 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:34.969 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:35.334 [2024-08-14 06:59:02.287675] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:35.334 [2024-08-14 06:59:02.364863] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:35.334 [2024-08-14 06:59:02.364961] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:35.334 [2024-08-14 06:59:02.364980] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:35.334 [2024-08-14 06:59:02.364993] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:35.334 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:35.334 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:35.334 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:35.334 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:35.334 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:35.334 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:35.334 06:59:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:35.334 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:35.334 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:35.334 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:35.334 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.334 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.592 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:35.592 "name": "raid_bdev1", 00:27:35.592 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:35.592 "strip_size_kb": 0, 00:27:35.592 "state": "online", 00:27:35.592 "raid_level": "raid1", 00:27:35.593 "superblock": true, 00:27:35.593 "num_base_bdevs": 2, 00:27:35.593 "num_base_bdevs_discovered": 1, 00:27:35.593 "num_base_bdevs_operational": 1, 00:27:35.593 "base_bdevs_list": [ 00:27:35.593 { 00:27:35.593 "name": null, 00:27:35.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:35.593 "is_configured": false, 00:27:35.593 "data_offset": 256, 00:27:35.593 "data_size": 7936 00:27:35.593 }, 00:27:35.593 { 00:27:35.593 "name": "BaseBdev2", 00:27:35.593 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:35.593 "is_configured": true, 00:27:35.593 "data_offset": 256, 00:27:35.593 "data_size": 7936 00:27:35.593 } 00:27:35.593 ] 00:27:35.593 }' 00:27:35.593 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:35.593 06:59:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:36.162 06:59:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:36.162 06:59:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:36.162 06:59:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:36.162 06:59:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:36.162 06:59:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:36.162 06:59:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.162 06:59:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.162 06:59:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:36.162 "name": "raid_bdev1", 00:27:36.162 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:36.162 "strip_size_kb": 0, 00:27:36.162 "state": "online", 00:27:36.162 "raid_level": "raid1", 00:27:36.162 "superblock": true, 00:27:36.162 "num_base_bdevs": 2, 00:27:36.162 "num_base_bdevs_discovered": 1, 00:27:36.162 "num_base_bdevs_operational": 1, 00:27:36.162 "base_bdevs_list": [ 00:27:36.162 { 00:27:36.162 "name": null, 00:27:36.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.162 "is_configured": false, 
00:27:36.162 "data_offset": 256, 00:27:36.162 "data_size": 7936 00:27:36.162 }, 00:27:36.162 { 00:27:36.162 "name": "BaseBdev2", 00:27:36.162 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:36.162 "is_configured": true, 00:27:36.162 "data_offset": 256, 00:27:36.162 "data_size": 7936 00:27:36.162 } 00:27:36.162 ] 00:27:36.162 }' 00:27:36.162 06:59:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:36.420 06:59:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:36.420 06:59:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:36.420 06:59:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:36.420 06:59:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:36.690 [2024-08-14 06:59:03.678213] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:36.690 [2024-08-14 06:59:03.680081] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:27:36.690 [2024-08-14 06:59:03.681922] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:36.690 06:59:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@678 -- # sleep 1 00:27:37.629 06:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:37.629 06:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:37.629 06:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:37.629 06:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:37.629 06:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:37.629 06:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.629 06:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.888 06:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:37.888 "name": "raid_bdev1", 00:27:37.888 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:37.888 "strip_size_kb": 0, 00:27:37.888 "state": "online", 00:27:37.888 "raid_level": "raid1", 00:27:37.888 "superblock": true, 00:27:37.888 "num_base_bdevs": 2, 00:27:37.888 "num_base_bdevs_discovered": 2, 00:27:37.888 "num_base_bdevs_operational": 2, 00:27:37.888 "process": { 00:27:37.888 "type": "rebuild", 00:27:37.888 "target": "spare", 00:27:37.888 "progress": { 00:27:37.888 "blocks": 3072, 00:27:37.888 "percent": 38 00:27:37.888 } 00:27:37.888 }, 00:27:37.888 "base_bdevs_list": [ 00:27:37.888 { 00:27:37.888 "name": "spare", 00:27:37.888 "uuid": "7c2ade1e-10e3-5619-87a8-d3ae0fd27c12", 00:27:37.888 "is_configured": true, 00:27:37.888 "data_offset": 256, 00:27:37.888 "data_size": 7936 00:27:37.888 }, 00:27:37.888 { 00:27:37.888 "name": "BaseBdev2", 00:27:37.888 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:37.888 "is_configured": true, 
00:27:37.888 "data_offset": 256, 00:27:37.888 "data_size": 7936 00:27:37.888 } 00:27:37.888 ] 00:27:37.888 }' 00:27:37.888 06:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:37.888 06:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:37.888 06:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:37.888 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:37.888 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:27:37.888 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:27:37.888 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:27:37.888 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:27:37.888 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:27:37.888 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:27:37.888 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # local timeout=1304 00:27:37.888 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:37.888 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:37.888 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:37.888 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:37.888 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:37.888 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:37.888 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.888 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.148 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:38.148 "name": "raid_bdev1", 00:27:38.148 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:38.148 "strip_size_kb": 0, 00:27:38.148 "state": "online", 00:27:38.148 "raid_level": "raid1", 00:27:38.148 "superblock": true, 00:27:38.148 "num_base_bdevs": 2, 00:27:38.148 "num_base_bdevs_discovered": 2, 00:27:38.148 "num_base_bdevs_operational": 2, 00:27:38.148 "process": { 00:27:38.148 "type": "rebuild", 00:27:38.148 "target": "spare", 00:27:38.148 "progress": { 00:27:38.148 "blocks": 3840, 00:27:38.148 "percent": 48 00:27:38.148 } 00:27:38.148 }, 00:27:38.148 "base_bdevs_list": [ 00:27:38.148 { 00:27:38.148 "name": "spare", 00:27:38.148 "uuid": "7c2ade1e-10e3-5619-87a8-d3ae0fd27c12", 00:27:38.148 "is_configured": true, 00:27:38.148 "data_offset": 256, 00:27:38.148 "data_size": 7936 00:27:38.148 }, 00:27:38.148 { 00:27:38.148 "name": "BaseBdev2", 00:27:38.148 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:38.148 
"is_configured": true, 00:27:38.148 "data_offset": 256, 00:27:38.148 "data_size": 7936 00:27:38.148 } 00:27:38.148 ] 00:27:38.148 }' 00:27:38.148 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:38.148 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:38.148 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:38.148 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:38.148 06:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:39.526 06:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:39.526 06:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:39.526 06:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:39.526 06:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:39.526 06:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:39.526 06:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:39.526 06:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.526 06:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.526 06:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:39.526 "name": "raid_bdev1", 00:27:39.526 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:39.526 "strip_size_kb": 0, 00:27:39.526 "state": "online", 00:27:39.526 "raid_level": "raid1", 00:27:39.526 "superblock": true, 00:27:39.526 "num_base_bdevs": 2, 00:27:39.526 "num_base_bdevs_discovered": 2, 00:27:39.526 "num_base_bdevs_operational": 2, 00:27:39.526 "process": { 00:27:39.526 "type": "rebuild", 00:27:39.526 "target": "spare", 00:27:39.526 "progress": { 00:27:39.526 "blocks": 7168, 00:27:39.526 "percent": 90 00:27:39.526 } 00:27:39.526 }, 00:27:39.526 "base_bdevs_list": [ 00:27:39.527 { 00:27:39.527 "name": "spare", 00:27:39.527 "uuid": "7c2ade1e-10e3-5619-87a8-d3ae0fd27c12", 00:27:39.527 "is_configured": true, 00:27:39.527 "data_offset": 256, 00:27:39.527 "data_size": 7936 00:27:39.527 }, 00:27:39.527 { 00:27:39.527 "name": "BaseBdev2", 00:27:39.527 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:39.527 "is_configured": true, 00:27:39.527 "data_offset": 256, 00:27:39.527 "data_size": 7936 00:27:39.527 } 00:27:39.527 ] 00:27:39.527 }' 00:27:39.527 06:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:39.527 06:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:39.527 06:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:39.527 06:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:39.527 06:59:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:39.786 [2024-08-14 06:59:06.795706] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:39.786 [2024-08-14 06:59:06.795877] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:39.786 [2024-08-14 06:59:06.796052] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:40.724 06:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:40.724 06:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:40.724 06:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:40.724 06:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:40.724 06:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:40.724 06:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:40.724 06:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.724 06:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.724 06:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:40.724 "name": "raid_bdev1", 00:27:40.724 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:40.724 "strip_size_kb": 0, 00:27:40.724 "state": "online", 00:27:40.724 "raid_level": "raid1", 00:27:40.724 "superblock": true, 00:27:40.724 "num_base_bdevs": 2, 00:27:40.724 "num_base_bdevs_discovered": 2, 00:27:40.724 "num_base_bdevs_operational": 2, 00:27:40.724 "base_bdevs_list": [ 00:27:40.724 { 00:27:40.724 "name": "spare", 00:27:40.724 "uuid": "7c2ade1e-10e3-5619-87a8-d3ae0fd27c12", 00:27:40.724 "is_configured": true, 00:27:40.724 "data_offset": 256, 00:27:40.724 "data_size": 7936 00:27:40.724 }, 00:27:40.724 { 00:27:40.724 "name": "BaseBdev2", 00:27:40.724 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:40.724 "is_configured": true, 00:27:40.724 "data_offset": 256, 00:27:40.724 "data_size": 7936 00:27:40.724 } 00:27:40.724 ] 00:27:40.724 }' 00:27:40.724 06:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:40.724 06:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:40.724 06:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:40.983 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:40.983 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@724 -- # break 00:27:40.983 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:40.983 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:40.983 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:40.983 06:59:08 
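
The polling traced above relies on jq's // alternative operator: '.process.type // "none"' and '.process.target // "none"' return the literal string none as soon as the rebuild finishes and the process object drops out of the bdev_raid_get_bdevs output, so the rebuild/spare comparisons at lines 189-190 stop matching and line 724 breaks out of the wait loop, after which line 730 re-verifies the bdev with none/none. A rough reconstruction of that loop inferred from the xtrace (bdev_raid.sh ~721-731), not the verbatim SPDK source:

    local timeout=1304                          # seconds, as set at line 721 inside the test function
    while (( SECONDS < timeout )); do
        if ! verify_raid_bdev_process raid_bdev1 rebuild spare; then
            break                               # .process is gone from the RPC output -> rebuild completed
        fi
        sleep 1
    done
    verify_raid_bdev_process raid_bdev1 none none
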
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:40.983 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:40.983 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.983 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.983 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:40.983 "name": "raid_bdev1", 00:27:40.983 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:40.983 "strip_size_kb": 0, 00:27:40.983 "state": "online", 00:27:40.983 "raid_level": "raid1", 00:27:40.983 "superblock": true, 00:27:40.983 "num_base_bdevs": 2, 00:27:40.983 "num_base_bdevs_discovered": 2, 00:27:40.983 "num_base_bdevs_operational": 2, 00:27:40.983 "base_bdevs_list": [ 00:27:40.983 { 00:27:40.983 "name": "spare", 00:27:40.983 "uuid": "7c2ade1e-10e3-5619-87a8-d3ae0fd27c12", 00:27:40.983 "is_configured": true, 00:27:40.983 "data_offset": 256, 00:27:40.983 "data_size": 7936 00:27:40.983 }, 00:27:40.983 { 00:27:40.983 "name": "BaseBdev2", 00:27:40.983 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:40.983 "is_configured": true, 00:27:40.983 "data_offset": 256, 00:27:40.983 "data_size": 7936 00:27:40.983 } 00:27:40.983 ] 00:27:40.983 }' 00:27:40.983 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:41.243 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:41.243 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:41.243 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:41.243 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:41.243 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:41.243 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:41.243 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:41.243 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:41.243 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:41.243 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:41.243 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:41.243 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:41.243 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:41.243 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.243 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:27:41.503 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:41.503 "name": "raid_bdev1", 00:27:41.503 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:41.503 "strip_size_kb": 0, 00:27:41.503 "state": "online", 00:27:41.503 "raid_level": "raid1", 00:27:41.503 "superblock": true, 00:27:41.503 "num_base_bdevs": 2, 00:27:41.503 "num_base_bdevs_discovered": 2, 00:27:41.503 "num_base_bdevs_operational": 2, 00:27:41.503 "base_bdevs_list": [ 00:27:41.503 { 00:27:41.503 "name": "spare", 00:27:41.503 "uuid": "7c2ade1e-10e3-5619-87a8-d3ae0fd27c12", 00:27:41.503 "is_configured": true, 00:27:41.503 "data_offset": 256, 00:27:41.503 "data_size": 7936 00:27:41.503 }, 00:27:41.503 { 00:27:41.503 "name": "BaseBdev2", 00:27:41.503 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:41.503 "is_configured": true, 00:27:41.503 "data_offset": 256, 00:27:41.503 "data_size": 7936 00:27:41.503 } 00:27:41.503 ] 00:27:41.503 }' 00:27:41.503 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:41.503 06:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:42.072 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:42.072 [2024-08-14 06:59:09.319258] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:42.072 [2024-08-14 06:59:09.319366] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:42.072 [2024-08-14 06:59:09.319483] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:42.072 [2024-08-14 06:59:09.319590] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:42.072 [2024-08-14 06:59:09.319629] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:27:42.332 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.332 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # jq length 00:27:42.332 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:27:42.332 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:27:42.332 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:27:42.332 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:42.332 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:42.332 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:42.332 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:42.332 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:42.332 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:42.332 06:59:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:27:42.332 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:42.332 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:42.332 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:42.592 /dev/nbd0 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@865 -- # local i 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # break 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:42.592 1+0 records in 00:27:42.592 1+0 records out 00:27:42.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386597 s, 10.6 MB/s 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # size=4096 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # return 0 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:42.592 06:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:42.851 /dev/nbd1 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@865 -- # local i 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # break 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:42.851 1+0 records in 00:27:42.851 1+0 records out 00:27:42.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342159 s, 12.0 MB/s 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # size=4096 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # return 0 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:42.851 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:43.111 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:43.111 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:43.111 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:43.111 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:43.111 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:27:43.111 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:43.111 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:43.111 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:43.111 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:43.111 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:43.111 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:43.111 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:43.111 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:43.111 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:27:43.111 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:27:43.370 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:43.370 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:43.370 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:43.370 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:43.370 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:43.370 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:43.370 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:43.370 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:43.370 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:27:43.370 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:27:43.370 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:27:43.370 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:43.630 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:43.889 [2024-08-14 06:59:10.978101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:43.889 [2024-08-14 06:59:10.978201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:43.889 [2024-08-14 06:59:10.978229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:27:43.889 [2024-08-14 06:59:10.978239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:43.889 [2024-08-14 06:59:10.980236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:43.889 [2024-08-14 06:59:10.980274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:43.889 [2024-08-14 06:59:10.980348] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:43.889 [2024-08-14 06:59:10.980397] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:43.889 [2024-08-14 06:59:10.980527] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:43.889 spare 00:27:43.889 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:43.889 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:43.889 06:59:10 
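
In the stretch above, BaseBdev1 and the rebuilt spare are exported as /dev/nbd0 and /dev/nbd1 and compared with cmp -i 1048576, which skips the first 1048576 bytes of both devices before demanding byte-for-byte equality; 1048576 is exactly the 256-block data_offset at the 4096-byte block size reported earlier, so only the mirrored data region is compared. Before each device is used, waitfornbd polls /proc/partitions and issues one O_DIRECT read to make sure the export actually serves I/O. A sketch of that helper pieced together from the autotest_common.sh line numbers in the trace; the retry delay and the scratch-file path are assumptions, as only the loop bounds, the grep and the dd/stat probe are visible here:

    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                               # assumed back-off; not shown in this log
        done
        # one 4 KiB direct read proves the nbd device answers I/O
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]       # the probe file must be non-empty
        rm -f /tmp/nbdtest
    }
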
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:43.889 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:43.889 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:43.889 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:43.889 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:43.889 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:43.889 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:43.889 06:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:43.889 06:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.889 06:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.889 [2024-08-14 06:59:11.080426] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:27:43.889 [2024-08-14 06:59:11.080575] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:43.889 [2024-08-14 06:59:11.080776] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:27:43.889 [2024-08-14 06:59:11.080956] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:27:43.889 [2024-08-14 06:59:11.080997] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:27:43.889 [2024-08-14 06:59:11.081129] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:44.148 06:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:44.148 "name": "raid_bdev1", 00:27:44.148 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:44.148 "strip_size_kb": 0, 00:27:44.148 "state": "online", 00:27:44.148 "raid_level": "raid1", 00:27:44.148 "superblock": true, 00:27:44.148 "num_base_bdevs": 2, 00:27:44.148 "num_base_bdevs_discovered": 2, 00:27:44.148 "num_base_bdevs_operational": 2, 00:27:44.148 "base_bdevs_list": [ 00:27:44.148 { 00:27:44.148 "name": "spare", 00:27:44.148 "uuid": "7c2ade1e-10e3-5619-87a8-d3ae0fd27c12", 00:27:44.148 "is_configured": true, 00:27:44.148 "data_offset": 256, 00:27:44.148 "data_size": 7936 00:27:44.148 }, 00:27:44.148 { 00:27:44.148 "name": "BaseBdev2", 00:27:44.148 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:44.148 "is_configured": true, 00:27:44.148 "data_offset": 256, 00:27:44.148 "data_size": 7936 00:27:44.148 } 00:27:44.148 ] 00:27:44.148 }' 00:27:44.148 06:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:44.148 06:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.717 06:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:44.717 06:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:44.717 06:59:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:44.717 06:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:44.717 06:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:44.717 06:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.717 06:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.977 06:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:44.977 "name": "raid_bdev1", 00:27:44.977 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:44.977 "strip_size_kb": 0, 00:27:44.977 "state": "online", 00:27:44.977 "raid_level": "raid1", 00:27:44.977 "superblock": true, 00:27:44.977 "num_base_bdevs": 2, 00:27:44.977 "num_base_bdevs_discovered": 2, 00:27:44.977 "num_base_bdevs_operational": 2, 00:27:44.977 "base_bdevs_list": [ 00:27:44.977 { 00:27:44.977 "name": "spare", 00:27:44.977 "uuid": "7c2ade1e-10e3-5619-87a8-d3ae0fd27c12", 00:27:44.977 "is_configured": true, 00:27:44.977 "data_offset": 256, 00:27:44.977 "data_size": 7936 00:27:44.977 }, 00:27:44.977 { 00:27:44.977 "name": "BaseBdev2", 00:27:44.977 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:44.977 "is_configured": true, 00:27:44.977 "data_offset": 256, 00:27:44.977 "data_size": 7936 00:27:44.977 } 00:27:44.977 ] 00:27:44.977 }' 00:27:44.977 06:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:44.977 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:44.977 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:44.977 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:44.977 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.977 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:45.238 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:27:45.238 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:45.238 [2024-08-14 06:59:12.483624] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:45.501 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:45.501 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:45.501 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:45.501 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:45.501 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:45.501 06:59:12 
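
After the process state settles back to none, the trace above removes the spare from the array with bdev_raid_remove_base_bdev and then expects raid_bdev1 to remain online but degraded: one base bdev discovered and operational, with the vacated slot reported as a null name and all-zero uuid. The body of verify_raid_bdev_state is hidden behind xtrace_disable in this log, but from its locals and the JSON it consumes, its assertions presumably amount to something like the following sketch (not the SPDK source):

    # raid_bdev_info holds the jq-selected JSON object shown in the trace
    [[ $(jq -r '.state'                      <<< "$raid_bdev_info") == online ]]
    [[ $(jq -r '.raid_level'                 <<< "$raid_bdev_info") == raid1 ]]
    [[ $(jq -r '.strip_size_kb'              <<< "$raid_bdev_info") == 0 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") == 1 ]]
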
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:45.501 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:45.501 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:45.501 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:45.501 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:45.501 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.501 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.501 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:45.501 "name": "raid_bdev1", 00:27:45.501 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:45.501 "strip_size_kb": 0, 00:27:45.501 "state": "online", 00:27:45.501 "raid_level": "raid1", 00:27:45.501 "superblock": true, 00:27:45.501 "num_base_bdevs": 2, 00:27:45.501 "num_base_bdevs_discovered": 1, 00:27:45.501 "num_base_bdevs_operational": 1, 00:27:45.501 "base_bdevs_list": [ 00:27:45.501 { 00:27:45.501 "name": null, 00:27:45.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.501 "is_configured": false, 00:27:45.501 "data_offset": 256, 00:27:45.501 "data_size": 7936 00:27:45.501 }, 00:27:45.501 { 00:27:45.501 "name": "BaseBdev2", 00:27:45.501 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:45.501 "is_configured": true, 00:27:45.501 "data_offset": 256, 00:27:45.501 "data_size": 7936 00:27:45.501 } 00:27:45.501 ] 00:27:45.501 }' 00:27:45.501 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:45.501 06:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.071 06:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:46.331 [2024-08-14 06:59:13.442020] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:46.331 [2024-08-14 06:59:13.442266] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:46.331 [2024-08-14 06:59:13.442286] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
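
The call traced above (bdev_raid.sh line 770) hands the spare back to the array, and the examine path decides how to treat it by comparing superblock sequence numbers: the NOTICE shows the spare's on-disk seq_number (4) is lower than the live raid bdev's (5), so the returning member is considered out of date and is re-added as a rebuild target rather than resumed as-is. Issued by hand against the same socket, the step is simply:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_add_base_bdev raid_bdev1 spare
    sleep 1        # as at line 771: give the rebuild thread a moment to start before polling again
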
00:27:46.331 [2024-08-14 06:59:13.442337] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:46.331 [2024-08-14 06:59:13.444043] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:27:46.331 [2024-08-14 06:59:13.445933] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:46.331 06:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # sleep 1 00:27:47.275 06:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:47.275 06:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:47.275 06:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:47.275 06:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:47.275 06:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:47.275 06:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.275 06:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.535 06:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:47.535 "name": "raid_bdev1", 00:27:47.535 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:47.535 "strip_size_kb": 0, 00:27:47.535 "state": "online", 00:27:47.535 "raid_level": "raid1", 00:27:47.535 "superblock": true, 00:27:47.535 "num_base_bdevs": 2, 00:27:47.535 "num_base_bdevs_discovered": 2, 00:27:47.535 "num_base_bdevs_operational": 2, 00:27:47.535 "process": { 00:27:47.535 "type": "rebuild", 00:27:47.535 "target": "spare", 00:27:47.535 "progress": { 00:27:47.535 "blocks": 3072, 00:27:47.535 "percent": 38 00:27:47.535 } 00:27:47.535 }, 00:27:47.535 "base_bdevs_list": [ 00:27:47.535 { 00:27:47.535 "name": "spare", 00:27:47.535 "uuid": "7c2ade1e-10e3-5619-87a8-d3ae0fd27c12", 00:27:47.535 "is_configured": true, 00:27:47.535 "data_offset": 256, 00:27:47.535 "data_size": 7936 00:27:47.535 }, 00:27:47.535 { 00:27:47.535 "name": "BaseBdev2", 00:27:47.535 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:47.535 "is_configured": true, 00:27:47.535 "data_offset": 256, 00:27:47.535 "data_size": 7936 00:27:47.535 } 00:27:47.535 ] 00:27:47.535 }' 00:27:47.535 06:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:47.535 06:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:47.535 06:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:47.535 06:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:47.535 06:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:47.794 [2024-08-14 06:59:14.956638] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:48.053 [2024-08-14 06:59:15.052963] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid 
bdev raid_bdev1: No such device 00:27:48.053 [2024-08-14 06:59:15.053080] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:48.053 [2024-08-14 06:59:15.053098] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:48.053 [2024-08-14 06:59:15.053108] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:48.053 06:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:48.053 06:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:48.053 06:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:48.053 06:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:48.053 06:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:48.053 06:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:48.053 06:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:48.053 06:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:48.053 06:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:48.053 06:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:48.053 06:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.053 06:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.312 06:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:48.312 "name": "raid_bdev1", 00:27:48.312 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:48.312 "strip_size_kb": 0, 00:27:48.312 "state": "online", 00:27:48.312 "raid_level": "raid1", 00:27:48.312 "superblock": true, 00:27:48.312 "num_base_bdevs": 2, 00:27:48.312 "num_base_bdevs_discovered": 1, 00:27:48.312 "num_base_bdevs_operational": 1, 00:27:48.312 "base_bdevs_list": [ 00:27:48.312 { 00:27:48.312 "name": null, 00:27:48.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.312 "is_configured": false, 00:27:48.312 "data_offset": 256, 00:27:48.312 "data_size": 7936 00:27:48.312 }, 00:27:48.312 { 00:27:48.312 "name": "BaseBdev2", 00:27:48.312 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:48.312 "is_configured": true, 00:27:48.312 "data_offset": 256, 00:27:48.312 "data_size": 7936 00:27:48.312 } 00:27:48.312 ] 00:27:48.312 }' 00:27:48.312 06:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:48.312 06:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.879 06:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:48.879 [2024-08-14 06:59:16.094720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:48.879 [2024-08-14 06:59:16.094904] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.879 [2024-08-14 06:59:16.094955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:27:48.879 [2024-08-14 06:59:16.094993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.879 [2024-08-14 06:59:16.095280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.879 [2024-08-14 06:59:16.095345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:48.879 [2024-08-14 06:59:16.095453] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:48.879 [2024-08-14 06:59:16.095509] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:48.879 [2024-08-14 06:59:16.095557] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:48.879 [2024-08-14 06:59:16.095618] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:48.879 [2024-08-14 06:59:16.097367] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:27:48.879 [2024-08-14 06:59:16.099389] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:48.879 spare 00:27:48.879 06:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # sleep 1 00:27:50.277 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:50.277 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:50.277 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:50.277 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:50.277 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:50.277 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.277 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.277 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:50.277 "name": "raid_bdev1", 00:27:50.277 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:50.277 "strip_size_kb": 0, 00:27:50.277 "state": "online", 00:27:50.277 "raid_level": "raid1", 00:27:50.277 "superblock": true, 00:27:50.277 "num_base_bdevs": 2, 00:27:50.277 "num_base_bdevs_discovered": 2, 00:27:50.277 "num_base_bdevs_operational": 2, 00:27:50.277 "process": { 00:27:50.277 "type": "rebuild", 00:27:50.277 "target": "spare", 00:27:50.277 "progress": { 00:27:50.277 "blocks": 3072, 00:27:50.277 "percent": 38 00:27:50.277 } 00:27:50.277 }, 00:27:50.277 "base_bdevs_list": [ 00:27:50.277 { 00:27:50.277 "name": "spare", 00:27:50.277 "uuid": "7c2ade1e-10e3-5619-87a8-d3ae0fd27c12", 00:27:50.277 "is_configured": true, 00:27:50.277 "data_offset": 256, 00:27:50.277 "data_size": 7936 00:27:50.277 }, 00:27:50.277 { 00:27:50.277 "name": "BaseBdev2", 00:27:50.277 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:50.277 "is_configured": true, 00:27:50.277 
"data_offset": 256, 00:27:50.277 "data_size": 7936 00:27:50.277 } 00:27:50.277 ] 00:27:50.277 }' 00:27:50.277 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:50.277 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:50.277 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:50.277 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:50.277 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:50.535 [2024-08-14 06:59:17.653992] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:50.535 [2024-08-14 06:59:17.706455] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:50.535 [2024-08-14 06:59:17.706549] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:50.535 [2024-08-14 06:59:17.706569] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:50.535 [2024-08-14 06:59:17.706577] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:50.535 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:50.535 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:50.535 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:50.535 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:50.535 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:50.535 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:50.535 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:50.535 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:50.535 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:50.535 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:50.535 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.535 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.794 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:50.794 "name": "raid_bdev1", 00:27:50.794 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:50.794 "strip_size_kb": 0, 00:27:50.794 "state": "online", 00:27:50.794 "raid_level": "raid1", 00:27:50.794 "superblock": true, 00:27:50.794 "num_base_bdevs": 2, 00:27:50.794 "num_base_bdevs_discovered": 1, 00:27:50.794 "num_base_bdevs_operational": 1, 00:27:50.794 "base_bdevs_list": [ 00:27:50.794 { 00:27:50.794 "name": null, 00:27:50.794 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:27:50.794 "is_configured": false, 00:27:50.794 "data_offset": 256, 00:27:50.794 "data_size": 7936 00:27:50.794 }, 00:27:50.794 { 00:27:50.794 "name": "BaseBdev2", 00:27:50.794 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:50.794 "is_configured": true, 00:27:50.794 "data_offset": 256, 00:27:50.794 "data_size": 7936 00:27:50.794 } 00:27:50.794 ] 00:27:50.794 }' 00:27:50.794 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:50.794 06:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.360 06:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:51.360 06:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:51.360 06:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:51.360 06:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:51.360 06:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:51.360 06:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.360 06:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:51.618 06:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:51.618 "name": "raid_bdev1", 00:27:51.618 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:51.618 "strip_size_kb": 0, 00:27:51.618 "state": "online", 00:27:51.618 "raid_level": "raid1", 00:27:51.618 "superblock": true, 00:27:51.618 "num_base_bdevs": 2, 00:27:51.618 "num_base_bdevs_discovered": 1, 00:27:51.618 "num_base_bdevs_operational": 1, 00:27:51.618 "base_bdevs_list": [ 00:27:51.618 { 00:27:51.618 "name": null, 00:27:51.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.618 "is_configured": false, 00:27:51.618 "data_offset": 256, 00:27:51.618 "data_size": 7936 00:27:51.618 }, 00:27:51.618 { 00:27:51.618 "name": "BaseBdev2", 00:27:51.618 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:51.618 "is_configured": true, 00:27:51.618 "data_offset": 256, 00:27:51.618 "data_size": 7936 00:27:51.618 } 00:27:51.618 ] 00:27:51.618 }' 00:27:51.618 06:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:51.618 06:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:51.618 06:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:51.618 06:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:51.618 06:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:51.876 06:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:52.134 [2024-08-14 06:59:19.219470] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:27:52.134 [2024-08-14 06:59:19.219548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:52.134 [2024-08-14 06:59:19.219572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:27:52.134 [2024-08-14 06:59:19.219582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:52.134 [2024-08-14 06:59:19.219789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:52.134 [2024-08-14 06:59:19.219801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:52.134 [2024-08-14 06:59:19.219863] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:52.134 [2024-08-14 06:59:19.219875] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:52.134 [2024-08-14 06:59:19.219889] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:52.134 BaseBdev1 00:27:52.134 06:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@789 -- # sleep 1 00:27:53.069 06:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:53.069 06:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:53.069 06:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:53.069 06:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:53.069 06:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:53.069 06:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:53.069 06:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:53.070 06:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:53.070 06:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:53.070 06:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:53.070 06:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.070 06:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.328 06:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:53.328 "name": "raid_bdev1", 00:27:53.328 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:53.328 "strip_size_kb": 0, 00:27:53.328 "state": "online", 00:27:53.328 "raid_level": "raid1", 00:27:53.328 "superblock": true, 00:27:53.328 "num_base_bdevs": 2, 00:27:53.328 "num_base_bdevs_discovered": 1, 00:27:53.328 "num_base_bdevs_operational": 1, 00:27:53.328 "base_bdevs_list": [ 00:27:53.328 { 00:27:53.328 "name": null, 00:27:53.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.328 "is_configured": false, 00:27:53.328 "data_offset": 256, 00:27:53.328 "data_size": 7936 00:27:53.328 }, 00:27:53.328 { 00:27:53.328 "name": 
"BaseBdev2", 00:27:53.328 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:53.328 "is_configured": true, 00:27:53.328 "data_offset": 256, 00:27:53.328 "data_size": 7936 00:27:53.328 } 00:27:53.328 ] 00:27:53.328 }' 00:27:53.328 06:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:53.328 06:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:53.893 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:53.893 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:53.893 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:53.893 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:53.893 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:53.893 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.893 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:54.150 "name": "raid_bdev1", 00:27:54.150 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:54.150 "strip_size_kb": 0, 00:27:54.150 "state": "online", 00:27:54.150 "raid_level": "raid1", 00:27:54.150 "superblock": true, 00:27:54.150 "num_base_bdevs": 2, 00:27:54.150 "num_base_bdevs_discovered": 1, 00:27:54.150 "num_base_bdevs_operational": 1, 00:27:54.150 "base_bdevs_list": [ 00:27:54.150 { 00:27:54.150 "name": null, 00:27:54.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:54.150 "is_configured": false, 00:27:54.150 "data_offset": 256, 00:27:54.150 "data_size": 7936 00:27:54.150 }, 00:27:54.150 { 00:27:54.150 "name": "BaseBdev2", 00:27:54.150 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:54.150 "is_configured": true, 00:27:54.150 "data_offset": 256, 00:27:54.150 "data_size": 7936 00:27:54.150 } 00:27:54.150 ] 00:27:54.150 }' 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@646 -- # local es=0 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@634 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:54.150 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:54.408 [2024-08-14 06:59:21.559581] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:54.408 [2024-08-14 06:59:21.559865] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:54.408 [2024-08-14 06:59:21.559930] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:54.408 request: 00:27:54.408 { 00:27:54.408 "base_bdev": "BaseBdev1", 00:27:54.408 "raid_bdev": "raid_bdev1", 00:27:54.408 "method": "bdev_raid_add_base_bdev", 00:27:54.408 "req_id": 1 00:27:54.408 } 00:27:54.408 Got JSON-RPC error response 00:27:54.408 response: 00:27:54.408 { 00:27:54.408 "code": -22, 00:27:54.408 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:54.408 } 00:27:54.408 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@649 -- # es=1 00:27:54.408 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:27:54.408 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:27:54.408 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:27:54.408 06:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@793 -- # sleep 1 00:27:55.342 06:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:55.342 06:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:55.342 06:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:55.342 06:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:55.342 06:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:55.342 06:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:55.342 06:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # 
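
The NOT wrapper traced above (autotest_common.sh ~646-673) encodes an expected failure: it runs the RPC, records the exit status, and succeeds only if that status is non-zero, which is what happens here because adding BaseBdev1 is rejected with JSON-RPC error -22 once its superblock turns out not to carry the array's uuid. A compressed sketch of the helper under those assumptions; the real one also screens out statuses above 128 and an optional expected-error pattern, as the (( es > 128 )) and [[ -n '' ]] checks in the trace suggest:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))        # succeed only when the wrapped command failed
    }
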
local raid_bdev_info 00:27:55.342 06:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:55.342 06:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:55.342 06:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:55.342 06:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.342 06:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.600 06:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:55.600 "name": "raid_bdev1", 00:27:55.600 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:55.600 "strip_size_kb": 0, 00:27:55.600 "state": "online", 00:27:55.600 "raid_level": "raid1", 00:27:55.600 "superblock": true, 00:27:55.600 "num_base_bdevs": 2, 00:27:55.600 "num_base_bdevs_discovered": 1, 00:27:55.600 "num_base_bdevs_operational": 1, 00:27:55.600 "base_bdevs_list": [ 00:27:55.600 { 00:27:55.600 "name": null, 00:27:55.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.600 "is_configured": false, 00:27:55.600 "data_offset": 256, 00:27:55.600 "data_size": 7936 00:27:55.600 }, 00:27:55.600 { 00:27:55.600 "name": "BaseBdev2", 00:27:55.600 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:55.600 "is_configured": true, 00:27:55.600 "data_offset": 256, 00:27:55.600 "data_size": 7936 00:27:55.600 } 00:27:55.600 ] 00:27:55.600 }' 00:27:55.600 06:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:55.600 06:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:56.223 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:56.223 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:56.223 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:56.223 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:56.223 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:56.223 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.223 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.482 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:56.482 "name": "raid_bdev1", 00:27:56.482 "uuid": "1efb7e51-f0d4-4f57-af3d-bba9b0f21c78", 00:27:56.482 "strip_size_kb": 0, 00:27:56.482 "state": "online", 00:27:56.482 "raid_level": "raid1", 00:27:56.482 "superblock": true, 00:27:56.482 "num_base_bdevs": 2, 00:27:56.482 "num_base_bdevs_discovered": 1, 00:27:56.482 "num_base_bdevs_operational": 1, 00:27:56.482 "base_bdevs_list": [ 00:27:56.482 { 00:27:56.482 "name": null, 00:27:56.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.482 "is_configured": false, 00:27:56.482 "data_offset": 256, 00:27:56.482 "data_size": 7936 
00:27:56.482 }, 00:27:56.482 { 00:27:56.482 "name": "BaseBdev2", 00:27:56.482 "uuid": "769f0300-7c3b-53ee-b9a5-41f8b27660d4", 00:27:56.482 "is_configured": true, 00:27:56.482 "data_offset": 256, 00:27:56.482 "data_size": 7936 00:27:56.482 } 00:27:56.482 ] 00:27:56.482 }' 00:27:56.482 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:56.482 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:56.482 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:56.482 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:56.482 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@798 -- # killprocess 108684 00:27:56.482 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@946 -- # '[' -z 108684 ']' 00:27:56.482 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # kill -0 108684 00:27:56.482 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@951 -- # uname 00:27:56.482 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:56.482 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 108684 00:27:56.482 killing process with pid 108684 00:27:56.482 Received shutdown signal, test time was about 60.000000 seconds 00:27:56.482 00:27:56.483 Latency(us) 00:27:56.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.483 =================================================================================================================== 00:27:56.483 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:56.483 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:56.483 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:56.483 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 108684' 00:27:56.483 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@965 -- # kill 108684 00:27:56.483 [2024-08-14 06:59:23.722037] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:56.483 [2024-08-14 06:59:23.722195] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:56.483 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # wait 108684 00:27:56.483 [2024-08-14 06:59:23.722266] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:56.483 [2024-08-14 06:59:23.722276] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:27:56.741 [2024-08-14 06:59:23.756949] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:56.741 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@800 -- # return 0 00:27:56.741 00:27:56.741 real 0m29.581s 00:27:56.741 user 0m46.397s 00:27:56.741 sys 0m3.985s 00:27:56.741 06:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:56.741 06:59:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:56.741 ************************************ 00:27:56.741 END TEST raid_rebuild_test_sb_md_separate 00:27:56.741 ************************************ 00:27:57.000 06:59:24 bdev_raid -- bdev/bdev_raid.sh@989 -- # base_malloc_params='-m 32 -i' 00:27:57.001 06:59:24 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:27:57.001 06:59:24 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:27:57.001 06:59:24 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:57.001 06:59:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:57.001 ************************************ 00:27:57.001 START TEST raid_state_function_test_sb_md_interleaved 00:27:57.001 ************************************ 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@234 -- # strip_size=0 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=109487 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 109487' 00:27:57.001 Process raid pid: 109487 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 109487 /var/tmp/spdk-raid.sock 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 109487 ']' 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:57.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:57.001 06:59:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.001 [2024-08-14 06:59:24.151251] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
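The xtrace entries that follow repeatedly call verify_raid_bdev_state, which reads the raid bdev back over the test socket and compares its fields against expected values. A minimal sketch of that read-back pattern, using only the rpc.py invocation, socket path, jq selection and JSON field names visible in this trace; the standalone wrapper function below and its argument handling are illustrative assumptions, not the actual helper from bdev_raid.sh:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    check_raid_state() {
        # usage (hypothetical): check_raid_state Existed_Raid configuring raid1
        local name=$1 want_state=$2 want_level=$3 info
        # same RPC call and jq selection as bdev_raid.sh@126 in the trace
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r '.state' <<<"$info") == "$want_state" ]] || return 1
        [[ $(jq -r '.raid_level' <<<"$info") == "$want_level" ]] || return 1
        # the traced helper also inspects these counters from the same JSON
        jq -r '.num_base_bdevs_discovered, .num_base_bdevs_operational' <<<"$info"
    }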
00:27:57.001 [2024-08-14 06:59:24.151462] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.260 [2024-08-14 06:59:24.297089] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.260 [2024-08-14 06:59:24.350803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.260 [2024-08-14 06:59:24.394992] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:57.260 [2024-08-14 06:59:24.395111] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:57.829 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:57.829 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:27:57.829 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:58.089 [2024-08-14 06:59:25.187639] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:58.089 [2024-08-14 06:59:25.187799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:58.089 [2024-08-14 06:59:25.187845] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:58.089 [2024-08-14 06:59:25.187872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:58.089 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:58.089 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:58.089 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:58.089 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:58.089 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:58.089 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:58.089 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:58.089 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:58.089 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:58.089 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:58.089 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:58.089 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.349 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:27:58.349 "name": "Existed_Raid", 00:27:58.349 "uuid": "6997281f-3ded-41cc-aba3-3cc0cf1521a4", 00:27:58.349 "strip_size_kb": 0, 00:27:58.349 "state": "configuring", 00:27:58.349 "raid_level": "raid1", 00:27:58.349 "superblock": true, 00:27:58.349 "num_base_bdevs": 2, 00:27:58.349 "num_base_bdevs_discovered": 0, 00:27:58.349 "num_base_bdevs_operational": 2, 00:27:58.349 "base_bdevs_list": [ 00:27:58.349 { 00:27:58.349 "name": "BaseBdev1", 00:27:58.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.349 "is_configured": false, 00:27:58.349 "data_offset": 0, 00:27:58.349 "data_size": 0 00:27:58.349 }, 00:27:58.349 { 00:27:58.349 "name": "BaseBdev2", 00:27:58.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.349 "is_configured": false, 00:27:58.349 "data_offset": 0, 00:27:58.349 "data_size": 0 00:27:58.349 } 00:27:58.349 ] 00:27:58.349 }' 00:27:58.349 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:58.349 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:58.917 06:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:58.917 [2024-08-14 06:59:26.169736] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:58.917 [2024-08-14 06:59:26.169781] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:27:59.177 06:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:59.177 [2024-08-14 06:59:26.385406] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:59.177 [2024-08-14 06:59:26.385467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:59.177 [2024-08-14 06:59:26.385491] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:59.177 [2024-08-14 06:59:26.385501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:59.177 06:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:27:59.437 [2024-08-14 06:59:26.622488] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:59.437 BaseBdev1 00:27:59.437 06:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:27:59.437 06:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:27:59.437 06:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:59.437 06:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:27:59.437 06:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:59.437 06:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:59.437 06:59:26 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:59.697 06:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:59.957 [ 00:27:59.957 { 00:27:59.957 "name": "BaseBdev1", 00:27:59.957 "aliases": [ 00:27:59.957 "95dce135-d9c4-4ab0-a26a-43202ce7f216" 00:27:59.957 ], 00:27:59.957 "product_name": "Malloc disk", 00:27:59.957 "block_size": 4128, 00:27:59.957 "num_blocks": 8192, 00:27:59.957 "uuid": "95dce135-d9c4-4ab0-a26a-43202ce7f216", 00:27:59.957 "md_size": 32, 00:27:59.957 "md_interleave": true, 00:27:59.957 "dif_type": 0, 00:27:59.957 "assigned_rate_limits": { 00:27:59.957 "rw_ios_per_sec": 0, 00:27:59.957 "rw_mbytes_per_sec": 0, 00:27:59.957 "r_mbytes_per_sec": 0, 00:27:59.957 "w_mbytes_per_sec": 0 00:27:59.957 }, 00:27:59.957 "claimed": true, 00:27:59.957 "claim_type": "exclusive_write", 00:27:59.957 "zoned": false, 00:27:59.957 "supported_io_types": { 00:27:59.957 "read": true, 00:27:59.957 "write": true, 00:27:59.957 "unmap": true, 00:27:59.957 "flush": true, 00:27:59.957 "reset": true, 00:27:59.957 "nvme_admin": false, 00:27:59.957 "nvme_io": false, 00:27:59.957 "nvme_io_md": false, 00:27:59.957 "write_zeroes": true, 00:27:59.957 "zcopy": true, 00:27:59.957 "get_zone_info": false, 00:27:59.957 "zone_management": false, 00:27:59.957 "zone_append": false, 00:27:59.957 "compare": false, 00:27:59.957 "compare_and_write": false, 00:27:59.957 "abort": true, 00:27:59.957 "seek_hole": false, 00:27:59.957 "seek_data": false, 00:27:59.957 "copy": true, 00:27:59.957 "nvme_iov_md": false 00:27:59.957 }, 00:27:59.957 "memory_domains": [ 00:27:59.957 { 00:27:59.957 "dma_device_id": "system", 00:27:59.957 "dma_device_type": 1 00:27:59.957 }, 00:27:59.957 { 00:27:59.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:59.957 "dma_device_type": 2 00:27:59.957 } 00:27:59.957 ], 00:27:59.957 "driver_specific": {} 00:27:59.957 } 00:27:59.957 ] 00:27:59.957 06:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:27:59.957 06:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:59.957 06:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:59.957 06:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:59.957 06:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:59.957 06:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:59.957 06:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:59.957 06:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:59.957 06:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:59.957 06:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:59.957 06:59:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:59.957 06:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.957 06:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:00.217 06:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:00.217 "name": "Existed_Raid", 00:28:00.217 "uuid": "664c4f64-6d0b-4822-8a6e-0b2fc028e299", 00:28:00.217 "strip_size_kb": 0, 00:28:00.217 "state": "configuring", 00:28:00.217 "raid_level": "raid1", 00:28:00.217 "superblock": true, 00:28:00.217 "num_base_bdevs": 2, 00:28:00.217 "num_base_bdevs_discovered": 1, 00:28:00.217 "num_base_bdevs_operational": 2, 00:28:00.217 "base_bdevs_list": [ 00:28:00.217 { 00:28:00.217 "name": "BaseBdev1", 00:28:00.217 "uuid": "95dce135-d9c4-4ab0-a26a-43202ce7f216", 00:28:00.217 "is_configured": true, 00:28:00.217 "data_offset": 256, 00:28:00.217 "data_size": 7936 00:28:00.217 }, 00:28:00.217 { 00:28:00.217 "name": "BaseBdev2", 00:28:00.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.217 "is_configured": false, 00:28:00.217 "data_offset": 0, 00:28:00.217 "data_size": 0 00:28:00.217 } 00:28:00.217 ] 00:28:00.217 }' 00:28:00.217 06:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:00.217 06:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:00.787 06:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:00.787 [2024-08-14 06:59:28.028250] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:00.787 [2024-08-14 06:59:28.028338] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:28:01.047 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:01.047 [2024-08-14 06:59:28.251886] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:01.047 [2024-08-14 06:59:28.253842] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:01.047 [2024-08-14 06:59:28.253889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:01.047 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:28:01.047 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:01.047 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:01.047 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:01.047 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:01.047 06:59:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:01.047 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:01.047 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:01.047 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:01.047 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:01.047 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:01.047 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:01.047 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.047 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:01.312 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:01.312 "name": "Existed_Raid", 00:28:01.312 "uuid": "16cedd84-c42b-46c5-81e3-04fa22a714d1", 00:28:01.312 "strip_size_kb": 0, 00:28:01.312 "state": "configuring", 00:28:01.312 "raid_level": "raid1", 00:28:01.312 "superblock": true, 00:28:01.312 "num_base_bdevs": 2, 00:28:01.312 "num_base_bdevs_discovered": 1, 00:28:01.312 "num_base_bdevs_operational": 2, 00:28:01.312 "base_bdevs_list": [ 00:28:01.312 { 00:28:01.312 "name": "BaseBdev1", 00:28:01.312 "uuid": "95dce135-d9c4-4ab0-a26a-43202ce7f216", 00:28:01.312 "is_configured": true, 00:28:01.312 "data_offset": 256, 00:28:01.312 "data_size": 7936 00:28:01.312 }, 00:28:01.312 { 00:28:01.312 "name": "BaseBdev2", 00:28:01.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.312 "is_configured": false, 00:28:01.312 "data_offset": 0, 00:28:01.312 "data_size": 0 00:28:01.312 } 00:28:01.312 ] 00:28:01.312 }' 00:28:01.312 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:01.312 06:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:01.888 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:28:02.146 [2024-08-14 06:59:29.291308] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:02.147 [2024-08-14 06:59:29.291622] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:28:02.147 [2024-08-14 06:59:29.291647] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:02.147 [2024-08-14 06:59:29.291758] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:28:02.147 [2024-08-14 06:59:29.291852] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:28:02.147 [2024-08-14 06:59:29.291864] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:28:02.147 [2024-08-14 06:59:29.291957] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:02.147 BaseBdev2 00:28:02.147 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:28:02.147 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:28:02.147 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:28:02.147 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:28:02.147 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:28:02.147 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:28:02.147 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:02.405 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:02.664 [ 00:28:02.664 { 00:28:02.664 "name": "BaseBdev2", 00:28:02.664 "aliases": [ 00:28:02.664 "c888d80d-2d75-4470-aed8-bb2fb0247a96" 00:28:02.664 ], 00:28:02.664 "product_name": "Malloc disk", 00:28:02.664 "block_size": 4128, 00:28:02.664 "num_blocks": 8192, 00:28:02.664 "uuid": "c888d80d-2d75-4470-aed8-bb2fb0247a96", 00:28:02.664 "md_size": 32, 00:28:02.664 "md_interleave": true, 00:28:02.664 "dif_type": 0, 00:28:02.664 "assigned_rate_limits": { 00:28:02.664 "rw_ios_per_sec": 0, 00:28:02.664 "rw_mbytes_per_sec": 0, 00:28:02.664 "r_mbytes_per_sec": 0, 00:28:02.664 "w_mbytes_per_sec": 0 00:28:02.664 }, 00:28:02.664 "claimed": true, 00:28:02.664 "claim_type": "exclusive_write", 00:28:02.664 "zoned": false, 00:28:02.664 "supported_io_types": { 00:28:02.664 "read": true, 00:28:02.664 "write": true, 00:28:02.664 "unmap": true, 00:28:02.664 "flush": true, 00:28:02.664 "reset": true, 00:28:02.664 "nvme_admin": false, 00:28:02.664 "nvme_io": false, 00:28:02.664 "nvme_io_md": false, 00:28:02.664 "write_zeroes": true, 00:28:02.664 "zcopy": true, 00:28:02.664 "get_zone_info": false, 00:28:02.664 "zone_management": false, 00:28:02.664 "zone_append": false, 00:28:02.664 "compare": false, 00:28:02.664 "compare_and_write": false, 00:28:02.664 "abort": true, 00:28:02.664 "seek_hole": false, 00:28:02.664 "seek_data": false, 00:28:02.664 "copy": true, 00:28:02.664 "nvme_iov_md": false 00:28:02.664 }, 00:28:02.664 "memory_domains": [ 00:28:02.664 { 00:28:02.664 "dma_device_id": "system", 00:28:02.664 "dma_device_type": 1 00:28:02.665 }, 00:28:02.665 { 00:28:02.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:02.665 "dma_device_type": 2 00:28:02.665 } 00:28:02.665 ], 00:28:02.665 "driver_specific": {} 00:28:02.665 } 00:28:02.665 ] 00:28:02.665 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:28:02.665 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:02.665 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:02.665 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 
0 2 00:28:02.665 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:02.665 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:02.665 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:02.665 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:02.665 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:02.665 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:02.665 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:02.665 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:02.665 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:02.665 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.665 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:02.924 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:02.924 "name": "Existed_Raid", 00:28:02.924 "uuid": "16cedd84-c42b-46c5-81e3-04fa22a714d1", 00:28:02.924 "strip_size_kb": 0, 00:28:02.924 "state": "online", 00:28:02.924 "raid_level": "raid1", 00:28:02.924 "superblock": true, 00:28:02.924 "num_base_bdevs": 2, 00:28:02.924 "num_base_bdevs_discovered": 2, 00:28:02.924 "num_base_bdevs_operational": 2, 00:28:02.924 "base_bdevs_list": [ 00:28:02.924 { 00:28:02.924 "name": "BaseBdev1", 00:28:02.924 "uuid": "95dce135-d9c4-4ab0-a26a-43202ce7f216", 00:28:02.924 "is_configured": true, 00:28:02.924 "data_offset": 256, 00:28:02.924 "data_size": 7936 00:28:02.924 }, 00:28:02.924 { 00:28:02.924 "name": "BaseBdev2", 00:28:02.924 "uuid": "c888d80d-2d75-4470-aed8-bb2fb0247a96", 00:28:02.924 "is_configured": true, 00:28:02.924 "data_offset": 256, 00:28:02.924 "data_size": 7936 00:28:02.924 } 00:28:02.924 ] 00:28:02.924 }' 00:28:02.924 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:02.924 06:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:03.493 06:59:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:28:03.493 06:59:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:03.493 06:59:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:03.493 06:59:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:03.493 06:59:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:03.494 06:59:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:28:03.494 
06:59:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:03.494 06:59:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:03.753 [2024-08-14 06:59:30.749393] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:03.753 06:59:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:03.753 "name": "Existed_Raid", 00:28:03.753 "aliases": [ 00:28:03.753 "16cedd84-c42b-46c5-81e3-04fa22a714d1" 00:28:03.753 ], 00:28:03.753 "product_name": "Raid Volume", 00:28:03.753 "block_size": 4128, 00:28:03.753 "num_blocks": 7936, 00:28:03.753 "uuid": "16cedd84-c42b-46c5-81e3-04fa22a714d1", 00:28:03.753 "md_size": 32, 00:28:03.753 "md_interleave": true, 00:28:03.753 "dif_type": 0, 00:28:03.753 "assigned_rate_limits": { 00:28:03.753 "rw_ios_per_sec": 0, 00:28:03.753 "rw_mbytes_per_sec": 0, 00:28:03.753 "r_mbytes_per_sec": 0, 00:28:03.753 "w_mbytes_per_sec": 0 00:28:03.753 }, 00:28:03.753 "claimed": false, 00:28:03.753 "zoned": false, 00:28:03.753 "supported_io_types": { 00:28:03.753 "read": true, 00:28:03.753 "write": true, 00:28:03.753 "unmap": false, 00:28:03.753 "flush": false, 00:28:03.753 "reset": true, 00:28:03.753 "nvme_admin": false, 00:28:03.753 "nvme_io": false, 00:28:03.753 "nvme_io_md": false, 00:28:03.753 "write_zeroes": true, 00:28:03.753 "zcopy": false, 00:28:03.753 "get_zone_info": false, 00:28:03.753 "zone_management": false, 00:28:03.753 "zone_append": false, 00:28:03.753 "compare": false, 00:28:03.753 "compare_and_write": false, 00:28:03.753 "abort": false, 00:28:03.753 "seek_hole": false, 00:28:03.753 "seek_data": false, 00:28:03.753 "copy": false, 00:28:03.753 "nvme_iov_md": false 00:28:03.753 }, 00:28:03.753 "memory_domains": [ 00:28:03.753 { 00:28:03.753 "dma_device_id": "system", 00:28:03.753 "dma_device_type": 1 00:28:03.753 }, 00:28:03.753 { 00:28:03.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:03.753 "dma_device_type": 2 00:28:03.753 }, 00:28:03.753 { 00:28:03.753 "dma_device_id": "system", 00:28:03.753 "dma_device_type": 1 00:28:03.753 }, 00:28:03.753 { 00:28:03.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:03.753 "dma_device_type": 2 00:28:03.753 } 00:28:03.753 ], 00:28:03.753 "driver_specific": { 00:28:03.753 "raid": { 00:28:03.753 "uuid": "16cedd84-c42b-46c5-81e3-04fa22a714d1", 00:28:03.753 "strip_size_kb": 0, 00:28:03.753 "state": "online", 00:28:03.753 "raid_level": "raid1", 00:28:03.753 "superblock": true, 00:28:03.753 "num_base_bdevs": 2, 00:28:03.753 "num_base_bdevs_discovered": 2, 00:28:03.754 "num_base_bdevs_operational": 2, 00:28:03.754 "base_bdevs_list": [ 00:28:03.754 { 00:28:03.754 "name": "BaseBdev1", 00:28:03.754 "uuid": "95dce135-d9c4-4ab0-a26a-43202ce7f216", 00:28:03.754 "is_configured": true, 00:28:03.754 "data_offset": 256, 00:28:03.754 "data_size": 7936 00:28:03.754 }, 00:28:03.754 { 00:28:03.754 "name": "BaseBdev2", 00:28:03.754 "uuid": "c888d80d-2d75-4470-aed8-bb2fb0247a96", 00:28:03.754 "is_configured": true, 00:28:03.754 "data_offset": 256, 00:28:03.754 "data_size": 7936 00:28:03.754 } 00:28:03.754 ] 00:28:03.754 } 00:28:03.754 } 00:28:03.754 }' 00:28:03.754 06:59:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:03.754 06:59:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:28:03.754 BaseBdev2' 00:28:03.754 06:59:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:03.754 06:59:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:03.754 06:59:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:28:04.013 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:04.014 "name": "BaseBdev1", 00:28:04.014 "aliases": [ 00:28:04.014 "95dce135-d9c4-4ab0-a26a-43202ce7f216" 00:28:04.014 ], 00:28:04.014 "product_name": "Malloc disk", 00:28:04.014 "block_size": 4128, 00:28:04.014 "num_blocks": 8192, 00:28:04.014 "uuid": "95dce135-d9c4-4ab0-a26a-43202ce7f216", 00:28:04.014 "md_size": 32, 00:28:04.014 "md_interleave": true, 00:28:04.014 "dif_type": 0, 00:28:04.014 "assigned_rate_limits": { 00:28:04.014 "rw_ios_per_sec": 0, 00:28:04.014 "rw_mbytes_per_sec": 0, 00:28:04.014 "r_mbytes_per_sec": 0, 00:28:04.014 "w_mbytes_per_sec": 0 00:28:04.014 }, 00:28:04.014 "claimed": true, 00:28:04.014 "claim_type": "exclusive_write", 00:28:04.014 "zoned": false, 00:28:04.014 "supported_io_types": { 00:28:04.014 "read": true, 00:28:04.014 "write": true, 00:28:04.014 "unmap": true, 00:28:04.014 "flush": true, 00:28:04.014 "reset": true, 00:28:04.014 "nvme_admin": false, 00:28:04.014 "nvme_io": false, 00:28:04.014 "nvme_io_md": false, 00:28:04.014 "write_zeroes": true, 00:28:04.014 "zcopy": true, 00:28:04.014 "get_zone_info": false, 00:28:04.014 "zone_management": false, 00:28:04.014 "zone_append": false, 00:28:04.014 "compare": false, 00:28:04.014 "compare_and_write": false, 00:28:04.014 "abort": true, 00:28:04.014 "seek_hole": false, 00:28:04.014 "seek_data": false, 00:28:04.014 "copy": true, 00:28:04.014 "nvme_iov_md": false 00:28:04.014 }, 00:28:04.014 "memory_domains": [ 00:28:04.014 { 00:28:04.014 "dma_device_id": "system", 00:28:04.014 "dma_device_type": 1 00:28:04.014 }, 00:28:04.014 { 00:28:04.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:04.014 "dma_device_type": 2 00:28:04.014 } 00:28:04.014 ], 00:28:04.014 "driver_specific": {} 00:28:04.014 }' 00:28:04.014 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:04.014 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:04.014 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:04.014 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:04.014 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:04.014 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:04.014 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:04.273 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:04.273 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:04.273 06:59:31 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:04.273 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:04.273 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:04.273 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:04.273 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:04.273 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:04.533 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:04.533 "name": "BaseBdev2", 00:28:04.533 "aliases": [ 00:28:04.533 "c888d80d-2d75-4470-aed8-bb2fb0247a96" 00:28:04.533 ], 00:28:04.533 "product_name": "Malloc disk", 00:28:04.533 "block_size": 4128, 00:28:04.533 "num_blocks": 8192, 00:28:04.533 "uuid": "c888d80d-2d75-4470-aed8-bb2fb0247a96", 00:28:04.533 "md_size": 32, 00:28:04.533 "md_interleave": true, 00:28:04.533 "dif_type": 0, 00:28:04.533 "assigned_rate_limits": { 00:28:04.533 "rw_ios_per_sec": 0, 00:28:04.533 "rw_mbytes_per_sec": 0, 00:28:04.533 "r_mbytes_per_sec": 0, 00:28:04.533 "w_mbytes_per_sec": 0 00:28:04.533 }, 00:28:04.533 "claimed": true, 00:28:04.533 "claim_type": "exclusive_write", 00:28:04.533 "zoned": false, 00:28:04.533 "supported_io_types": { 00:28:04.533 "read": true, 00:28:04.533 "write": true, 00:28:04.533 "unmap": true, 00:28:04.533 "flush": true, 00:28:04.533 "reset": true, 00:28:04.533 "nvme_admin": false, 00:28:04.533 "nvme_io": false, 00:28:04.533 "nvme_io_md": false, 00:28:04.533 "write_zeroes": true, 00:28:04.533 "zcopy": true, 00:28:04.533 "get_zone_info": false, 00:28:04.533 "zone_management": false, 00:28:04.533 "zone_append": false, 00:28:04.533 "compare": false, 00:28:04.533 "compare_and_write": false, 00:28:04.533 "abort": true, 00:28:04.533 "seek_hole": false, 00:28:04.533 "seek_data": false, 00:28:04.533 "copy": true, 00:28:04.533 "nvme_iov_md": false 00:28:04.533 }, 00:28:04.533 "memory_domains": [ 00:28:04.533 { 00:28:04.533 "dma_device_id": "system", 00:28:04.533 "dma_device_type": 1 00:28:04.533 }, 00:28:04.533 { 00:28:04.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:04.533 "dma_device_type": 2 00:28:04.533 } 00:28:04.533 ], 00:28:04.533 "driver_specific": {} 00:28:04.533 }' 00:28:04.533 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:04.533 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:04.533 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:04.533 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:04.793 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:04.793 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:04.793 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:04.793 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:04.793 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:04.793 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:04.793 06:59:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:04.793 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:04.793 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:05.053 [2024-08-14 06:59:32.214673] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.053 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.313 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:05.313 "name": "Existed_Raid", 00:28:05.313 "uuid": "16cedd84-c42b-46c5-81e3-04fa22a714d1", 00:28:05.313 "strip_size_kb": 0, 00:28:05.313 "state": "online", 00:28:05.313 "raid_level": "raid1", 00:28:05.313 "superblock": true, 00:28:05.313 "num_base_bdevs": 2, 00:28:05.313 "num_base_bdevs_discovered": 1, 
00:28:05.313 "num_base_bdevs_operational": 1, 00:28:05.313 "base_bdevs_list": [ 00:28:05.313 { 00:28:05.313 "name": null, 00:28:05.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.313 "is_configured": false, 00:28:05.313 "data_offset": 256, 00:28:05.313 "data_size": 7936 00:28:05.313 }, 00:28:05.313 { 00:28:05.313 "name": "BaseBdev2", 00:28:05.313 "uuid": "c888d80d-2d75-4470-aed8-bb2fb0247a96", 00:28:05.313 "is_configured": true, 00:28:05.313 "data_offset": 256, 00:28:05.313 "data_size": 7936 00:28:05.313 } 00:28:05.313 ] 00:28:05.313 }' 00:28:05.313 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:05.313 06:59:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.883 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:28:05.883 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:05.883 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:05.883 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.142 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:06.142 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:06.142 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:06.402 [2024-08-14 06:59:33.480728] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:06.402 [2024-08-14 06:59:33.480950] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:06.402 [2024-08-14 06:59:33.493255] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:06.402 [2024-08-14 06:59:33.493388] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:06.402 [2024-08-14 06:59:33.493434] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:28:06.402 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:06.402 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:06.402 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:28:06.402 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.728 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:28:06.728 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:28:06.728 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:28:06.728 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- 
# killprocess 109487 00:28:06.728 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 109487 ']' 00:28:06.728 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 109487 00:28:06.728 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:28:06.728 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:06.728 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 109487 00:28:06.728 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:06.728 killing process with pid 109487 00:28:06.728 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:06.728 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 109487' 00:28:06.728 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 109487 00:28:06.728 [2024-08-14 06:59:33.788371] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:06.728 06:59:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 109487 00:28:06.728 [2024-08-14 06:59:33.789428] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:06.987 06:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:28:06.987 00:28:06.987 real 0m9.973s 00:28:06.987 user 0m17.840s 00:28:06.987 sys 0m1.569s 00:28:06.987 06:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:06.987 ************************************ 00:28:06.987 END TEST raid_state_function_test_sb_md_interleaved 00:28:06.987 ************************************ 00:28:06.987 06:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:06.987 06:59:34 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:28:06.987 06:59:34 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:28:06.987 06:59:34 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:06.988 06:59:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:06.988 ************************************ 00:28:06.988 START TEST raid_superblock_test_md_interleaved 00:28:06.988 ************************************ 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:28:06.988 06:59:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@414 -- # local strip_size 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@427 -- # raid_pid=109822 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@428 -- # waitforlisten 109822 /var/tmp/spdk-raid.sock 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 109822 ']' 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:06.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:06.988 06:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:06.988 [2024-08-14 06:59:34.192067] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:28:06.988 [2024-08-14 06:59:34.192319] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109822 ] 00:28:07.247 [2024-08-14 06:59:34.337320] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.247 [2024-08-14 06:59:34.388714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.247 [2024-08-14 06:59:34.432014] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:07.247 [2024-08-14 06:59:34.432057] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:07.815 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:07.815 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:28:07.815 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:28:07.815 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:07.815 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:28:07.815 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:28:07.815 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:07.815 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:07.815 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:07.815 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:07.815 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:28:08.074 malloc1 00:28:08.075 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:08.334 [2024-08-14 06:59:35.449397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:08.334 [2024-08-14 06:59:35.449548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.334 [2024-08-14 06:59:35.449610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:28:08.334 [2024-08-14 06:59:35.449663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.334 [2024-08-14 06:59:35.451779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.334 [2024-08-14 06:59:35.451866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:08.334 pt1 00:28:08.334 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:08.334 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:08.334 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:28:08.334 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:28:08.334 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:08.334 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:08.334 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:08.334 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:08.334 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:28:08.593 malloc2 00:28:08.593 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:08.852 [2024-08-14 06:59:35.899459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:08.852 [2024-08-14 06:59:35.899655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.852 [2024-08-14 06:59:35.899696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:08.852 [2024-08-14 06:59:35.899728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.852 [2024-08-14 06:59:35.901815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.852 [2024-08-14 06:59:35.901896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:08.852 pt2 00:28:08.852 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:08.852 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:08.852 06:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:28:09.110 [2024-08-14 06:59:36.107164] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:09.110 [2024-08-14 06:59:36.109136] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:09.110 [2024-08-14 06:59:36.109421] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:28:09.110 [2024-08-14 06:59:36.109475] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:09.111 [2024-08-14 06:59:36.109624] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:28:09.111 [2024-08-14 06:59:36.109747] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:28:09.111 [2024-08-14 06:59:36.109794] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:28:09.111 [2024-08-14 06:59:36.109926] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:09.111 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 
0 2 00:28:09.111 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:09.111 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:09.111 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:09.111 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:09.111 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:09.111 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:09.111 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:09.111 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:09.111 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:09.111 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.111 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:09.111 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:09.111 "name": "raid_bdev1", 00:28:09.111 "uuid": "96fce678-2b5e-4277-be32-b2bf5cc5023f", 00:28:09.111 "strip_size_kb": 0, 00:28:09.111 "state": "online", 00:28:09.111 "raid_level": "raid1", 00:28:09.111 "superblock": true, 00:28:09.111 "num_base_bdevs": 2, 00:28:09.111 "num_base_bdevs_discovered": 2, 00:28:09.111 "num_base_bdevs_operational": 2, 00:28:09.111 "base_bdevs_list": [ 00:28:09.111 { 00:28:09.111 "name": "pt1", 00:28:09.111 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:09.111 "is_configured": true, 00:28:09.111 "data_offset": 256, 00:28:09.111 "data_size": 7936 00:28:09.111 }, 00:28:09.111 { 00:28:09.111 "name": "pt2", 00:28:09.111 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:09.111 "is_configured": true, 00:28:09.111 "data_offset": 256, 00:28:09.111 "data_size": 7936 00:28:09.111 } 00:28:09.111 ] 00:28:09.111 }' 00:28:09.111 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:09.111 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.678 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:28:09.678 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:09.678 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:09.678 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:09.678 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:09.678 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:28:09.678 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b raid_bdev1 00:28:09.678 06:59:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:09.937 [2024-08-14 06:59:37.097748] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:09.937 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:09.937 "name": "raid_bdev1", 00:28:09.937 "aliases": [ 00:28:09.937 "96fce678-2b5e-4277-be32-b2bf5cc5023f" 00:28:09.937 ], 00:28:09.937 "product_name": "Raid Volume", 00:28:09.937 "block_size": 4128, 00:28:09.937 "num_blocks": 7936, 00:28:09.937 "uuid": "96fce678-2b5e-4277-be32-b2bf5cc5023f", 00:28:09.937 "md_size": 32, 00:28:09.937 "md_interleave": true, 00:28:09.937 "dif_type": 0, 00:28:09.937 "assigned_rate_limits": { 00:28:09.937 "rw_ios_per_sec": 0, 00:28:09.937 "rw_mbytes_per_sec": 0, 00:28:09.937 "r_mbytes_per_sec": 0, 00:28:09.937 "w_mbytes_per_sec": 0 00:28:09.937 }, 00:28:09.937 "claimed": false, 00:28:09.937 "zoned": false, 00:28:09.937 "supported_io_types": { 00:28:09.937 "read": true, 00:28:09.937 "write": true, 00:28:09.937 "unmap": false, 00:28:09.937 "flush": false, 00:28:09.937 "reset": true, 00:28:09.937 "nvme_admin": false, 00:28:09.937 "nvme_io": false, 00:28:09.937 "nvme_io_md": false, 00:28:09.937 "write_zeroes": true, 00:28:09.937 "zcopy": false, 00:28:09.937 "get_zone_info": false, 00:28:09.937 "zone_management": false, 00:28:09.937 "zone_append": false, 00:28:09.937 "compare": false, 00:28:09.937 "compare_and_write": false, 00:28:09.937 "abort": false, 00:28:09.937 "seek_hole": false, 00:28:09.937 "seek_data": false, 00:28:09.937 "copy": false, 00:28:09.937 "nvme_iov_md": false 00:28:09.937 }, 00:28:09.937 "memory_domains": [ 00:28:09.937 { 00:28:09.937 "dma_device_id": "system", 00:28:09.937 "dma_device_type": 1 00:28:09.937 }, 00:28:09.937 { 00:28:09.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.937 "dma_device_type": 2 00:28:09.937 }, 00:28:09.937 { 00:28:09.937 "dma_device_id": "system", 00:28:09.937 "dma_device_type": 1 00:28:09.937 }, 00:28:09.937 { 00:28:09.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.937 "dma_device_type": 2 00:28:09.937 } 00:28:09.937 ], 00:28:09.937 "driver_specific": { 00:28:09.937 "raid": { 00:28:09.937 "uuid": "96fce678-2b5e-4277-be32-b2bf5cc5023f", 00:28:09.937 "strip_size_kb": 0, 00:28:09.937 "state": "online", 00:28:09.937 "raid_level": "raid1", 00:28:09.937 "superblock": true, 00:28:09.937 "num_base_bdevs": 2, 00:28:09.937 "num_base_bdevs_discovered": 2, 00:28:09.937 "num_base_bdevs_operational": 2, 00:28:09.937 "base_bdevs_list": [ 00:28:09.937 { 00:28:09.937 "name": "pt1", 00:28:09.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:09.937 "is_configured": true, 00:28:09.937 "data_offset": 256, 00:28:09.937 "data_size": 7936 00:28:09.937 }, 00:28:09.937 { 00:28:09.937 "name": "pt2", 00:28:09.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:09.937 "is_configured": true, 00:28:09.937 "data_offset": 256, 00:28:09.937 "data_size": 7936 00:28:09.937 } 00:28:09.937 ] 00:28:09.937 } 00:28:09.937 } 00:28:09.937 }' 00:28:09.937 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:09.937 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:09.937 pt2' 00:28:09.937 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for 
name in $base_bdev_names 00:28:09.937 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:09.937 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:10.196 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:10.196 "name": "pt1", 00:28:10.196 "aliases": [ 00:28:10.196 "00000000-0000-0000-0000-000000000001" 00:28:10.196 ], 00:28:10.196 "product_name": "passthru", 00:28:10.196 "block_size": 4128, 00:28:10.196 "num_blocks": 8192, 00:28:10.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:10.196 "md_size": 32, 00:28:10.196 "md_interleave": true, 00:28:10.196 "dif_type": 0, 00:28:10.196 "assigned_rate_limits": { 00:28:10.196 "rw_ios_per_sec": 0, 00:28:10.196 "rw_mbytes_per_sec": 0, 00:28:10.196 "r_mbytes_per_sec": 0, 00:28:10.196 "w_mbytes_per_sec": 0 00:28:10.196 }, 00:28:10.196 "claimed": true, 00:28:10.196 "claim_type": "exclusive_write", 00:28:10.196 "zoned": false, 00:28:10.196 "supported_io_types": { 00:28:10.196 "read": true, 00:28:10.196 "write": true, 00:28:10.196 "unmap": true, 00:28:10.196 "flush": true, 00:28:10.196 "reset": true, 00:28:10.196 "nvme_admin": false, 00:28:10.196 "nvme_io": false, 00:28:10.196 "nvme_io_md": false, 00:28:10.196 "write_zeroes": true, 00:28:10.196 "zcopy": true, 00:28:10.196 "get_zone_info": false, 00:28:10.196 "zone_management": false, 00:28:10.196 "zone_append": false, 00:28:10.196 "compare": false, 00:28:10.196 "compare_and_write": false, 00:28:10.196 "abort": true, 00:28:10.196 "seek_hole": false, 00:28:10.196 "seek_data": false, 00:28:10.196 "copy": true, 00:28:10.196 "nvme_iov_md": false 00:28:10.196 }, 00:28:10.196 "memory_domains": [ 00:28:10.196 { 00:28:10.196 "dma_device_id": "system", 00:28:10.196 "dma_device_type": 1 00:28:10.196 }, 00:28:10.196 { 00:28:10.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.196 "dma_device_type": 2 00:28:10.196 } 00:28:10.196 ], 00:28:10.196 "driver_specific": { 00:28:10.196 "passthru": { 00:28:10.196 "name": "pt1", 00:28:10.196 "base_bdev_name": "malloc1" 00:28:10.196 } 00:28:10.196 } 00:28:10.196 }' 00:28:10.196 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:10.196 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:10.455 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:10.455 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:10.455 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:10.455 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:10.455 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:10.455 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:10.455 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:10.455 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:10.455 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:10.455 06:59:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:10.455 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:10.455 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:10.455 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:10.714 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:10.714 "name": "pt2", 00:28:10.714 "aliases": [ 00:28:10.714 "00000000-0000-0000-0000-000000000002" 00:28:10.714 ], 00:28:10.714 "product_name": "passthru", 00:28:10.714 "block_size": 4128, 00:28:10.714 "num_blocks": 8192, 00:28:10.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:10.714 "md_size": 32, 00:28:10.714 "md_interleave": true, 00:28:10.714 "dif_type": 0, 00:28:10.714 "assigned_rate_limits": { 00:28:10.714 "rw_ios_per_sec": 0, 00:28:10.714 "rw_mbytes_per_sec": 0, 00:28:10.714 "r_mbytes_per_sec": 0, 00:28:10.714 "w_mbytes_per_sec": 0 00:28:10.714 }, 00:28:10.714 "claimed": true, 00:28:10.714 "claim_type": "exclusive_write", 00:28:10.714 "zoned": false, 00:28:10.714 "supported_io_types": { 00:28:10.714 "read": true, 00:28:10.714 "write": true, 00:28:10.714 "unmap": true, 00:28:10.714 "flush": true, 00:28:10.714 "reset": true, 00:28:10.714 "nvme_admin": false, 00:28:10.714 "nvme_io": false, 00:28:10.714 "nvme_io_md": false, 00:28:10.714 "write_zeroes": true, 00:28:10.714 "zcopy": true, 00:28:10.714 "get_zone_info": false, 00:28:10.714 "zone_management": false, 00:28:10.714 "zone_append": false, 00:28:10.714 "compare": false, 00:28:10.714 "compare_and_write": false, 00:28:10.714 "abort": true, 00:28:10.714 "seek_hole": false, 00:28:10.714 "seek_data": false, 00:28:10.714 "copy": true, 00:28:10.714 "nvme_iov_md": false 00:28:10.714 }, 00:28:10.714 "memory_domains": [ 00:28:10.714 { 00:28:10.714 "dma_device_id": "system", 00:28:10.714 "dma_device_type": 1 00:28:10.714 }, 00:28:10.714 { 00:28:10.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.714 "dma_device_type": 2 00:28:10.714 } 00:28:10.714 ], 00:28:10.714 "driver_specific": { 00:28:10.714 "passthru": { 00:28:10.714 "name": "pt2", 00:28:10.714 "base_bdev_name": "malloc2" 00:28:10.714 } 00:28:10.714 } 00:28:10.714 }' 00:28:10.714 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:10.714 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:10.974 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:10.974 06:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:10.974 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:10.974 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:10.974 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:10.974 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:10.974 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:10.974 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:10.974 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:11.233 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:11.233 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:11.233 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:28:11.233 [2024-08-14 06:59:38.471432] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:11.493 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=96fce678-2b5e-4277-be32-b2bf5cc5023f 00:28:11.493 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' -z 96fce678-2b5e-4277-be32-b2bf5cc5023f ']' 00:28:11.493 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:11.493 [2024-08-14 06:59:38.682761] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:11.493 [2024-08-14 06:59:38.682800] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:11.493 [2024-08-14 06:59:38.682904] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:11.493 [2024-08-14 06:59:38.682976] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:11.493 [2024-08-14 06:59:38.682989] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:28:11.493 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.493 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:28:11.751 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:28:11.752 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:28:11.752 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:28:11.752 06:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:12.011 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:28:12.011 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:12.270 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:28:12.270 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:12.529 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:28:12.529 06:59:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:28:12.529 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@646 -- # local es=0 00:28:12.529 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:28:12.529 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:12.529 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:28:12.529 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:12.529 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:28:12.529 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:12.529 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:28:12.529 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:12.529 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:12.529 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:28:12.789 [2024-08-14 06:59:39.796899] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:12.789 [2024-08-14 06:59:39.798882] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:12.789 [2024-08-14 06:59:39.799004] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:28:12.789 [2024-08-14 06:59:39.799109] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:28:12.789 [2024-08-14 06:59:39.799231] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:12.789 [2024-08-14 06:59:39.799285] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:28:12.789 request: 00:28:12.789 { 00:28:12.789 "name": "raid_bdev1", 00:28:12.789 "raid_level": "raid1", 00:28:12.789 "base_bdevs": [ 00:28:12.789 "malloc1", 00:28:12.789 "malloc2" 00:28:12.789 ], 00:28:12.789 "superblock": false, 00:28:12.789 "method": "bdev_raid_create", 00:28:12.789 "req_id": 1 00:28:12.789 } 00:28:12.789 Got JSON-RPC error response 00:28:12.789 response: 00:28:12.789 { 00:28:12.789 "code": -17, 00:28:12.789 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:12.789 } 00:28:12.789 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@649 -- # es=1 00:28:12.789 06:59:39 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:28:12.789 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:28:12.789 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:28:12.789 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.789 06:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:28:12.789 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:28:12.789 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:28:12.789 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:13.048 [2024-08-14 06:59:40.228088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:13.048 [2024-08-14 06:59:40.228261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:13.048 [2024-08-14 06:59:40.228287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:13.048 [2024-08-14 06:59:40.228300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:13.048 [2024-08-14 06:59:40.230418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:13.048 pt1 00:28:13.048 [2024-08-14 06:59:40.230534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:13.048 [2024-08-14 06:59:40.230614] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:13.048 [2024-08-14 06:59:40.230681] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:13.048 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:28:13.048 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:13.048 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:13.048 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:13.048 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:13.048 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:13.048 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:13.048 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:13.048 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:13.048 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:13.048 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:28:13.048 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.307 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:13.307 "name": "raid_bdev1", 00:28:13.307 "uuid": "96fce678-2b5e-4277-be32-b2bf5cc5023f", 00:28:13.307 "strip_size_kb": 0, 00:28:13.307 "state": "configuring", 00:28:13.307 "raid_level": "raid1", 00:28:13.307 "superblock": true, 00:28:13.307 "num_base_bdevs": 2, 00:28:13.307 "num_base_bdevs_discovered": 1, 00:28:13.307 "num_base_bdevs_operational": 2, 00:28:13.307 "base_bdevs_list": [ 00:28:13.307 { 00:28:13.307 "name": "pt1", 00:28:13.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:13.307 "is_configured": true, 00:28:13.307 "data_offset": 256, 00:28:13.307 "data_size": 7936 00:28:13.307 }, 00:28:13.307 { 00:28:13.307 "name": null, 00:28:13.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:13.307 "is_configured": false, 00:28:13.307 "data_offset": 256, 00:28:13.307 "data_size": 7936 00:28:13.307 } 00:28:13.307 ] 00:28:13.307 }' 00:28:13.307 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:13.307 06:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:13.876 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:28:13.876 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:28:13.876 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:28:13.876 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:14.135 [2024-08-14 06:59:41.298290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:14.135 [2024-08-14 06:59:41.298458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:14.135 [2024-08-14 06:59:41.298497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:14.135 [2024-08-14 06:59:41.298528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:14.135 [2024-08-14 06:59:41.298756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:14.135 [2024-08-14 06:59:41.298812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:14.135 [2024-08-14 06:59:41.298898] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:14.135 [2024-08-14 06:59:41.298961] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:14.135 [2024-08-14 06:59:41.299101] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:28:14.135 [2024-08-14 06:59:41.299150] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:14.135 [2024-08-14 06:59:41.299278] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:28:14.135 [2024-08-14 06:59:41.299384] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:28:14.135 [2024-08-14 06:59:41.299421] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000001900 00:28:14.135 [2024-08-14 06:59:41.299554] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:14.135 pt2 00:28:14.135 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:28:14.135 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:28:14.135 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:14.135 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:14.135 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:14.135 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:14.135 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:14.135 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:14.135 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:14.135 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:14.135 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:14.135 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:14.135 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.135 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.395 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:14.395 "name": "raid_bdev1", 00:28:14.395 "uuid": "96fce678-2b5e-4277-be32-b2bf5cc5023f", 00:28:14.395 "strip_size_kb": 0, 00:28:14.395 "state": "online", 00:28:14.395 "raid_level": "raid1", 00:28:14.395 "superblock": true, 00:28:14.395 "num_base_bdevs": 2, 00:28:14.395 "num_base_bdevs_discovered": 2, 00:28:14.395 "num_base_bdevs_operational": 2, 00:28:14.395 "base_bdevs_list": [ 00:28:14.395 { 00:28:14.395 "name": "pt1", 00:28:14.395 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:14.395 "is_configured": true, 00:28:14.395 "data_offset": 256, 00:28:14.395 "data_size": 7936 00:28:14.395 }, 00:28:14.395 { 00:28:14.395 "name": "pt2", 00:28:14.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:14.395 "is_configured": true, 00:28:14.395 "data_offset": 256, 00:28:14.395 "data_size": 7936 00:28:14.395 } 00:28:14.395 ] 00:28:14.395 }' 00:28:14.395 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:14.395 06:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.965 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:28:14.965 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:14.965 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:28:14.965 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:14.965 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:14.965 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:28:14.965 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:14.965 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:15.225 [2024-08-14 06:59:42.408872] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:15.225 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:15.225 "name": "raid_bdev1", 00:28:15.225 "aliases": [ 00:28:15.225 "96fce678-2b5e-4277-be32-b2bf5cc5023f" 00:28:15.225 ], 00:28:15.225 "product_name": "Raid Volume", 00:28:15.225 "block_size": 4128, 00:28:15.225 "num_blocks": 7936, 00:28:15.225 "uuid": "96fce678-2b5e-4277-be32-b2bf5cc5023f", 00:28:15.225 "md_size": 32, 00:28:15.225 "md_interleave": true, 00:28:15.225 "dif_type": 0, 00:28:15.225 "assigned_rate_limits": { 00:28:15.225 "rw_ios_per_sec": 0, 00:28:15.225 "rw_mbytes_per_sec": 0, 00:28:15.225 "r_mbytes_per_sec": 0, 00:28:15.225 "w_mbytes_per_sec": 0 00:28:15.225 }, 00:28:15.225 "claimed": false, 00:28:15.225 "zoned": false, 00:28:15.225 "supported_io_types": { 00:28:15.225 "read": true, 00:28:15.225 "write": true, 00:28:15.225 "unmap": false, 00:28:15.225 "flush": false, 00:28:15.225 "reset": true, 00:28:15.225 "nvme_admin": false, 00:28:15.225 "nvme_io": false, 00:28:15.225 "nvme_io_md": false, 00:28:15.225 "write_zeroes": true, 00:28:15.225 "zcopy": false, 00:28:15.225 "get_zone_info": false, 00:28:15.225 "zone_management": false, 00:28:15.225 "zone_append": false, 00:28:15.225 "compare": false, 00:28:15.225 "compare_and_write": false, 00:28:15.225 "abort": false, 00:28:15.225 "seek_hole": false, 00:28:15.225 "seek_data": false, 00:28:15.225 "copy": false, 00:28:15.225 "nvme_iov_md": false 00:28:15.225 }, 00:28:15.225 "memory_domains": [ 00:28:15.225 { 00:28:15.225 "dma_device_id": "system", 00:28:15.225 "dma_device_type": 1 00:28:15.225 }, 00:28:15.225 { 00:28:15.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.225 "dma_device_type": 2 00:28:15.225 }, 00:28:15.225 { 00:28:15.225 "dma_device_id": "system", 00:28:15.225 "dma_device_type": 1 00:28:15.225 }, 00:28:15.225 { 00:28:15.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.225 "dma_device_type": 2 00:28:15.225 } 00:28:15.225 ], 00:28:15.225 "driver_specific": { 00:28:15.225 "raid": { 00:28:15.225 "uuid": "96fce678-2b5e-4277-be32-b2bf5cc5023f", 00:28:15.225 "strip_size_kb": 0, 00:28:15.225 "state": "online", 00:28:15.225 "raid_level": "raid1", 00:28:15.225 "superblock": true, 00:28:15.225 "num_base_bdevs": 2, 00:28:15.225 "num_base_bdevs_discovered": 2, 00:28:15.225 "num_base_bdevs_operational": 2, 00:28:15.225 "base_bdevs_list": [ 00:28:15.225 { 00:28:15.225 "name": "pt1", 00:28:15.225 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:15.225 "is_configured": true, 00:28:15.225 "data_offset": 256, 00:28:15.225 "data_size": 7936 00:28:15.225 }, 00:28:15.225 { 00:28:15.225 "name": "pt2", 00:28:15.225 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:15.225 "is_configured": true, 00:28:15.226 "data_offset": 256, 
00:28:15.226 "data_size": 7936 00:28:15.226 } 00:28:15.226 ] 00:28:15.226 } 00:28:15.226 } 00:28:15.226 }' 00:28:15.226 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:15.226 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:15.226 pt2' 00:28:15.226 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:15.486 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:15.486 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:15.486 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:15.486 "name": "pt1", 00:28:15.486 "aliases": [ 00:28:15.486 "00000000-0000-0000-0000-000000000001" 00:28:15.486 ], 00:28:15.486 "product_name": "passthru", 00:28:15.486 "block_size": 4128, 00:28:15.486 "num_blocks": 8192, 00:28:15.486 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:15.486 "md_size": 32, 00:28:15.486 "md_interleave": true, 00:28:15.486 "dif_type": 0, 00:28:15.486 "assigned_rate_limits": { 00:28:15.486 "rw_ios_per_sec": 0, 00:28:15.486 "rw_mbytes_per_sec": 0, 00:28:15.486 "r_mbytes_per_sec": 0, 00:28:15.486 "w_mbytes_per_sec": 0 00:28:15.486 }, 00:28:15.486 "claimed": true, 00:28:15.486 "claim_type": "exclusive_write", 00:28:15.486 "zoned": false, 00:28:15.486 "supported_io_types": { 00:28:15.486 "read": true, 00:28:15.486 "write": true, 00:28:15.486 "unmap": true, 00:28:15.486 "flush": true, 00:28:15.486 "reset": true, 00:28:15.486 "nvme_admin": false, 00:28:15.486 "nvme_io": false, 00:28:15.486 "nvme_io_md": false, 00:28:15.486 "write_zeroes": true, 00:28:15.486 "zcopy": true, 00:28:15.486 "get_zone_info": false, 00:28:15.486 "zone_management": false, 00:28:15.486 "zone_append": false, 00:28:15.486 "compare": false, 00:28:15.486 "compare_and_write": false, 00:28:15.486 "abort": true, 00:28:15.486 "seek_hole": false, 00:28:15.486 "seek_data": false, 00:28:15.486 "copy": true, 00:28:15.486 "nvme_iov_md": false 00:28:15.486 }, 00:28:15.486 "memory_domains": [ 00:28:15.486 { 00:28:15.486 "dma_device_id": "system", 00:28:15.486 "dma_device_type": 1 00:28:15.486 }, 00:28:15.486 { 00:28:15.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.486 "dma_device_type": 2 00:28:15.486 } 00:28:15.486 ], 00:28:15.486 "driver_specific": { 00:28:15.486 "passthru": { 00:28:15.486 "name": "pt1", 00:28:15.486 "base_bdev_name": "malloc1" 00:28:15.486 } 00:28:15.486 } 00:28:15.486 }' 00:28:15.486 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:15.486 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:15.745 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:15.745 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:15.745 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:15.745 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:15.745 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:15.745 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:15.745 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:15.745 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:15.745 06:59:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:16.005 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:16.005 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:16.005 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:16.005 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:16.005 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:16.005 "name": "pt2", 00:28:16.005 "aliases": [ 00:28:16.005 "00000000-0000-0000-0000-000000000002" 00:28:16.005 ], 00:28:16.005 "product_name": "passthru", 00:28:16.005 "block_size": 4128, 00:28:16.005 "num_blocks": 8192, 00:28:16.005 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:16.005 "md_size": 32, 00:28:16.005 "md_interleave": true, 00:28:16.005 "dif_type": 0, 00:28:16.005 "assigned_rate_limits": { 00:28:16.005 "rw_ios_per_sec": 0, 00:28:16.005 "rw_mbytes_per_sec": 0, 00:28:16.005 "r_mbytes_per_sec": 0, 00:28:16.005 "w_mbytes_per_sec": 0 00:28:16.005 }, 00:28:16.005 "claimed": true, 00:28:16.005 "claim_type": "exclusive_write", 00:28:16.005 "zoned": false, 00:28:16.005 "supported_io_types": { 00:28:16.005 "read": true, 00:28:16.005 "write": true, 00:28:16.005 "unmap": true, 00:28:16.005 "flush": true, 00:28:16.005 "reset": true, 00:28:16.005 "nvme_admin": false, 00:28:16.005 "nvme_io": false, 00:28:16.005 "nvme_io_md": false, 00:28:16.005 "write_zeroes": true, 00:28:16.005 "zcopy": true, 00:28:16.005 "get_zone_info": false, 00:28:16.005 "zone_management": false, 00:28:16.005 "zone_append": false, 00:28:16.005 "compare": false, 00:28:16.005 "compare_and_write": false, 00:28:16.005 "abort": true, 00:28:16.005 "seek_hole": false, 00:28:16.005 "seek_data": false, 00:28:16.005 "copy": true, 00:28:16.005 "nvme_iov_md": false 00:28:16.005 }, 00:28:16.005 "memory_domains": [ 00:28:16.005 { 00:28:16.005 "dma_device_id": "system", 00:28:16.005 "dma_device_type": 1 00:28:16.005 }, 00:28:16.005 { 00:28:16.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:16.005 "dma_device_type": 2 00:28:16.005 } 00:28:16.005 ], 00:28:16.005 "driver_specific": { 00:28:16.005 "passthru": { 00:28:16.005 "name": "pt2", 00:28:16.005 "base_bdev_name": "malloc2" 00:28:16.005 } 00:28:16.005 } 00:28:16.005 }' 00:28:16.005 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:16.264 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:16.264 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:16.264 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:16.264 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:16.264 
06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:16.264 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:16.264 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:16.264 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:16.264 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:16.523 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:16.523 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:16.523 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:16.523 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:28:16.523 [2024-08-14 06:59:43.750635] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:16.523 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # '[' 96fce678-2b5e-4277-be32-b2bf5cc5023f '!=' 96fce678-2b5e-4277-be32-b2bf5cc5023f ']' 00:28:16.523 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:28:16.523 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:16.523 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:28:16.523 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:16.781 [2024-08-14 06:59:43.969998] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:16.781 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:16.781 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:16.781 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:16.781 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:16.781 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:16.781 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:16.781 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:16.781 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:16.781 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:16.781 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:16.781 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.781 06:59:43 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:17.040 06:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:17.040 "name": "raid_bdev1", 00:28:17.040 "uuid": "96fce678-2b5e-4277-be32-b2bf5cc5023f", 00:28:17.040 "strip_size_kb": 0, 00:28:17.040 "state": "online", 00:28:17.040 "raid_level": "raid1", 00:28:17.040 "superblock": true, 00:28:17.040 "num_base_bdevs": 2, 00:28:17.040 "num_base_bdevs_discovered": 1, 00:28:17.040 "num_base_bdevs_operational": 1, 00:28:17.040 "base_bdevs_list": [ 00:28:17.040 { 00:28:17.040 "name": null, 00:28:17.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.040 "is_configured": false, 00:28:17.040 "data_offset": 256, 00:28:17.040 "data_size": 7936 00:28:17.040 }, 00:28:17.040 { 00:28:17.040 "name": "pt2", 00:28:17.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:17.040 "is_configured": true, 00:28:17.040 "data_offset": 256, 00:28:17.040 "data_size": 7936 00:28:17.040 } 00:28:17.040 ] 00:28:17.040 }' 00:28:17.040 06:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:17.040 06:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:17.606 06:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:17.863 [2024-08-14 06:59:45.008218] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:17.863 [2024-08-14 06:59:45.008345] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:17.863 [2024-08-14 06:59:45.008460] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:17.863 [2024-08-14 06:59:45.008552] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:17.863 [2024-08-14 06:59:45.008608] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:28:17.863 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.863 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:28:18.121 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:28:18.121 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:28:18.121 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:28:18.121 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:28:18.121 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:18.379 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:18.379 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:28:18.379 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:28:18.379 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs 
- 1 )) 00:28:18.379 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@534 -- # i=1 00:28:18.379 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:18.638 [2024-08-14 06:59:45.683050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:18.638 [2024-08-14 06:59:45.683259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:18.638 [2024-08-14 06:59:45.683323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:28:18.638 [2024-08-14 06:59:45.683380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:18.638 [2024-08-14 06:59:45.685640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:18.638 [2024-08-14 06:59:45.685731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:18.638 [2024-08-14 06:59:45.685823] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:18.638 [2024-08-14 06:59:45.685904] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:18.638 [2024-08-14 06:59:45.686000] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:28:18.638 [2024-08-14 06:59:45.686049] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:18.638 [2024-08-14 06:59:45.686186] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:28:18.638 [2024-08-14 06:59:45.686297] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:28:18.638 [2024-08-14 06:59:45.686338] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:28:18.638 [2024-08-14 06:59:45.686444] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:18.638 pt2 00:28:18.638 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:18.638 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:18.638 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:18.638 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:18.638 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:18.638 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:18.638 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:18.638 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:18.638 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:18.638 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:18.638 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:28:18.638 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.897 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:18.897 "name": "raid_bdev1", 00:28:18.897 "uuid": "96fce678-2b5e-4277-be32-b2bf5cc5023f", 00:28:18.897 "strip_size_kb": 0, 00:28:18.897 "state": "online", 00:28:18.897 "raid_level": "raid1", 00:28:18.897 "superblock": true, 00:28:18.897 "num_base_bdevs": 2, 00:28:18.897 "num_base_bdevs_discovered": 1, 00:28:18.897 "num_base_bdevs_operational": 1, 00:28:18.897 "base_bdevs_list": [ 00:28:18.897 { 00:28:18.897 "name": null, 00:28:18.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:18.897 "is_configured": false, 00:28:18.897 "data_offset": 256, 00:28:18.897 "data_size": 7936 00:28:18.897 }, 00:28:18.897 { 00:28:18.897 "name": "pt2", 00:28:18.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:18.897 "is_configured": true, 00:28:18.897 "data_offset": 256, 00:28:18.897 "data_size": 7936 00:28:18.897 } 00:28:18.897 ] 00:28:18.897 }' 00:28:18.897 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:18.897 06:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.464 06:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:19.723 [2024-08-14 06:59:46.733340] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:19.723 [2024-08-14 06:59:46.733473] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:19.723 [2024-08-14 06:59:46.733584] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:19.723 [2024-08-14 06:59:46.733661] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:19.723 [2024-08-14 06:59:46.733735] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:28:19.723 06:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.723 06:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:28:19.723 06:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:28:19.723 06:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:28:19.723 06:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:28:19.723 06:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:19.982 [2024-08-14 06:59:47.160580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:19.982 [2024-08-14 06:59:47.160751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:19.982 [2024-08-14 06:59:47.160793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:28:19.982 [2024-08-14 06:59:47.160822] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:19.982 [2024-08-14 06:59:47.162886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:19.982 [2024-08-14 06:59:47.162972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:19.982 [2024-08-14 06:59:47.163056] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:19.982 [2024-08-14 06:59:47.163127] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:19.982 [2024-08-14 06:59:47.163284] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:19.982 [2024-08-14 06:59:47.163356] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:19.982 [2024-08-14 06:59:47.163405] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:28:19.982 [2024-08-14 06:59:47.163495] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:19.982 [2024-08-14 06:59:47.163603] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:28:19.982 [2024-08-14 06:59:47.163643] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:19.982 [2024-08-14 06:59:47.163771] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:28:19.982 [2024-08-14 06:59:47.163862] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:28:19.982 [2024-08-14 06:59:47.163899] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:28:19.982 [2024-08-14 06:59:47.163990] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:19.982 pt1 00:28:19.982 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:28:19.982 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:19.982 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:19.982 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:19.982 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:19.982 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:19.982 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:19.982 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:19.982 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:19.982 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:19.982 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:19.982 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.982 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:20.241 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:20.241 "name": "raid_bdev1", 00:28:20.241 "uuid": "96fce678-2b5e-4277-be32-b2bf5cc5023f", 00:28:20.241 "strip_size_kb": 0, 00:28:20.241 "state": "online", 00:28:20.241 "raid_level": "raid1", 00:28:20.241 "superblock": true, 00:28:20.241 "num_base_bdevs": 2, 00:28:20.241 "num_base_bdevs_discovered": 1, 00:28:20.241 "num_base_bdevs_operational": 1, 00:28:20.241 "base_bdevs_list": [ 00:28:20.241 { 00:28:20.241 "name": null, 00:28:20.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:20.241 "is_configured": false, 00:28:20.241 "data_offset": 256, 00:28:20.241 "data_size": 7936 00:28:20.241 }, 00:28:20.241 { 00:28:20.241 "name": "pt2", 00:28:20.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:20.241 "is_configured": true, 00:28:20.241 "data_offset": 256, 00:28:20.241 "data_size": 7936 00:28:20.241 } 00:28:20.241 ] 00:28:20.241 }' 00:28:20.241 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:20.241 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:20.808 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:28:20.808 06:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:21.066 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:28:21.066 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:21.066 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:28:21.323 [2024-08-14 06:59:48.394752] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:21.323 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # '[' 96fce678-2b5e-4277-be32-b2bf5cc5023f '!=' 96fce678-2b5e-4277-be32-b2bf5cc5023f ']' 00:28:21.323 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@578 -- # killprocess 109822 00:28:21.323 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 109822 ']' 00:28:21.323 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 109822 00:28:21.323 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:28:21.323 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:21.323 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 109822 00:28:21.323 killing process with pid 109822 00:28:21.323 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:21.323 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:21.323 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 109822' 00:28:21.323 06:59:48 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@965 -- # kill 109822 00:28:21.323 [2024-08-14 06:59:48.453028] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:21.323 [2024-08-14 06:59:48.453135] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:21.323 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # wait 109822 00:28:21.323 [2024-08-14 06:59:48.453246] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:21.323 [2024-08-14 06:59:48.453262] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:28:21.323 [2024-08-14 06:59:48.477830] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:21.582 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@580 -- # return 0 00:28:21.582 00:28:21.582 real 0m14.600s 00:28:21.582 user 0m26.877s 00:28:21.582 sys 0m2.205s 00:28:21.582 ************************************ 00:28:21.582 END TEST raid_superblock_test_md_interleaved 00:28:21.582 ************************************ 00:28:21.582 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:21.582 06:59:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:21.582 06:59:48 bdev_raid -- bdev/bdev_raid.sh@992 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:28:21.582 06:59:48 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:28:21.582 06:59:48 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:21.582 06:59:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:21.582 ************************************ 00:28:21.582 START TEST raid_rebuild_test_sb_md_interleaved 00:28:21.582 ************************************ 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false false 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # local verify=false 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 
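Note: the prologue above corresponds to the invocation "raid_rebuild_test raid1 2 true false false" (raid_level=raid1, num_base_bdevs=2, superblock=true, background_io=false, verify=false), and the (( i <= num_base_bdevs )) / echo steps build the base bdev name list. A condensed sketch of that pattern, using only stock bash (illustrative, not the script itself):

  # Sketch: build the base bdev name list the way the traced loop does.
  num_base_bdevs=2
  base_bdevs=()
  for (( i = 1; i <= num_base_bdevs; i++ )); do
      base_bdevs+=("BaseBdev$i")        # yields BaseBdev1 BaseBdev2
  done
  echo "${base_bdevs[@]}"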
00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # local strip_size 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # local create_arg 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@594 -- # local data_offset 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # raid_pid=110310 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # waitforlisten 110310 /var/tmp/spdk-raid.sock 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 110310 ']' 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:21.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:21.582 06:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:21.841 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:21.842 Zero copy mechanism will not be used. 00:28:21.842 [2024-08-14 06:59:48.856842] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
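Note: the rebuild test drives I/O through bdevperf rather than a bare SPDK app; the exact command line is recorded above. A restated sketch follows, with only the flags we are confident about annotated (the remaining flags -T, -U and -z are copied verbatim from the trace; our reading is that -z keeps bdevperf idle until the workload is started over RPC). The 3 MiB I/O size is also what triggers the "zero copy mechanism will not be used" notice above.

  # Sketch of the bdevperf launch recorded above (path and flags copied from the trace):
  #   -r  RPC socket shared with the test's rpc.py calls
  #   -t  run time in seconds; -w/-M workload type and read percentage
  #   -o  I/O size (3 MiB); -q queue depth
  #   -L  enable the bdev_raid debug logging that fills this log
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
      -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid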
00:28:21.842 [2024-08-14 06:59:48.856998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110310 ] 00:28:21.842 [2024-08-14 06:59:49.005292] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.842 [2024-08-14 06:59:49.059605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.100 [2024-08-14 06:59:49.104760] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:22.100 [2024-08-14 06:59:49.104802] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:22.669 06:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:22.669 06:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:28:22.669 06:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:22.669 06:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:28:22.669 BaseBdev1_malloc 00:28:22.669 06:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:22.947 [2024-08-14 06:59:50.131196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:22.947 [2024-08-14 06:59:50.131328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:22.947 [2024-08-14 06:59:50.131358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:28:22.947 [2024-08-14 06:59:50.131375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:22.947 [2024-08-14 06:59:50.133529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:22.947 [2024-08-14 06:59:50.133633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:22.947 BaseBdev1 00:28:22.947 06:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:22.947 06:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:28:23.206 BaseBdev2_malloc 00:28:23.206 06:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:23.464 [2024-08-14 06:59:50.593335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:23.464 [2024-08-14 06:59:50.593414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:23.464 [2024-08-14 06:59:50.593438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:23.464 [2024-08-14 06:59:50.593449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:23.464 [2024-08-14 06:59:50.595547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:28:23.464 [2024-08-14 06:59:50.595638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:23.464 BaseBdev2 00:28:23.464 06:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:28:23.723 spare_malloc 00:28:23.723 06:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:23.981 spare_delay 00:28:23.981 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:23.981 [2024-08-14 06:59:51.221685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:23.981 [2024-08-14 06:59:51.221866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:23.981 [2024-08-14 06:59:51.221900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:23.981 [2024-08-14 06:59:51.221914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:23.981 [2024-08-14 06:59:51.224206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:23.981 [2024-08-14 06:59:51.224248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:23.981 spare 00:28:24.240 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:24.240 [2024-08-14 06:59:51.425432] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:24.240 [2024-08-14 06:59:51.427572] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:24.240 [2024-08-14 06:59:51.427786] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:28:24.241 [2024-08-14 06:59:51.427806] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:24.241 [2024-08-14 06:59:51.427935] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:28:24.241 [2024-08-14 06:59:51.428026] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:28:24.241 [2024-08-14 06:59:51.428036] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:28:24.241 [2024-08-14 06:59:51.428130] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:24.241 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:24.241 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:24.241 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:24.241 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:24.241 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:28:24.241 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:24.241 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:24.241 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:24.241 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:24.241 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:24.241 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.241 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.499 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:24.499 "name": "raid_bdev1", 00:28:24.499 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:24.499 "strip_size_kb": 0, 00:28:24.499 "state": "online", 00:28:24.499 "raid_level": "raid1", 00:28:24.499 "superblock": true, 00:28:24.499 "num_base_bdevs": 2, 00:28:24.499 "num_base_bdevs_discovered": 2, 00:28:24.499 "num_base_bdevs_operational": 2, 00:28:24.499 "base_bdevs_list": [ 00:28:24.499 { 00:28:24.499 "name": "BaseBdev1", 00:28:24.499 "uuid": "9f36bdf2-ebbe-5636-9dc6-76dbfc2163b9", 00:28:24.499 "is_configured": true, 00:28:24.499 "data_offset": 256, 00:28:24.499 "data_size": 7936 00:28:24.499 }, 00:28:24.499 { 00:28:24.499 "name": "BaseBdev2", 00:28:24.499 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:24.499 "is_configured": true, 00:28:24.499 "data_offset": 256, 00:28:24.499 "data_size": 7936 00:28:24.499 } 00:28:24.499 ] 00:28:24.499 }' 00:28:24.499 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:24.499 06:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.068 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:28:25.068 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:25.327 [2024-08-14 06:59:52.479850] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:25.327 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:28:25.327 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.327 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:25.587 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:28:25.587 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:28:25.587 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # '[' false = true ']' 00:28:25.587 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:25.846 [2024-08-14 06:59:52.918910] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:25.846 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:25.846 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:25.846 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:25.846 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:25.846 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:25.846 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:25.846 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:25.846 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:25.846 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:25.846 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:25.846 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.846 06:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.106 06:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:26.106 "name": "raid_bdev1", 00:28:26.106 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:26.106 "strip_size_kb": 0, 00:28:26.106 "state": "online", 00:28:26.106 "raid_level": "raid1", 00:28:26.106 "superblock": true, 00:28:26.106 "num_base_bdevs": 2, 00:28:26.106 "num_base_bdevs_discovered": 1, 00:28:26.106 "num_base_bdevs_operational": 1, 00:28:26.106 "base_bdevs_list": [ 00:28:26.106 { 00:28:26.106 "name": null, 00:28:26.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.106 "is_configured": false, 00:28:26.106 "data_offset": 256, 00:28:26.106 "data_size": 7936 00:28:26.106 }, 00:28:26.106 { 00:28:26.106 "name": "BaseBdev2", 00:28:26.106 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:26.106 "is_configured": true, 00:28:26.106 "data_offset": 256, 00:28:26.106 "data_size": 7936 00:28:26.106 } 00:28:26.106 ] 00:28:26.106 }' 00:28:26.106 06:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:26.106 06:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:26.674 06:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:26.933 [2024-08-14 06:59:53.949298] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:26.933 [2024-08-14 06:59:53.952434] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:28:26.933 [2024-08-14 06:59:53.954442] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev 
raid_bdev1 00:28:26.933 06:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:27.870 06:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:27.870 06:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:27.870 06:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:27.870 06:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:27.870 06:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:27.870 06:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.870 06:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.129 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:28.129 "name": "raid_bdev1", 00:28:28.129 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:28.129 "strip_size_kb": 0, 00:28:28.129 "state": "online", 00:28:28.129 "raid_level": "raid1", 00:28:28.129 "superblock": true, 00:28:28.129 "num_base_bdevs": 2, 00:28:28.129 "num_base_bdevs_discovered": 2, 00:28:28.129 "num_base_bdevs_operational": 2, 00:28:28.129 "process": { 00:28:28.129 "type": "rebuild", 00:28:28.129 "target": "spare", 00:28:28.129 "progress": { 00:28:28.129 "blocks": 3072, 00:28:28.129 "percent": 38 00:28:28.129 } 00:28:28.129 }, 00:28:28.129 "base_bdevs_list": [ 00:28:28.129 { 00:28:28.129 "name": "spare", 00:28:28.129 "uuid": "29b06102-9a66-5a51-9646-938335193e56", 00:28:28.129 "is_configured": true, 00:28:28.129 "data_offset": 256, 00:28:28.129 "data_size": 7936 00:28:28.129 }, 00:28:28.129 { 00:28:28.129 "name": "BaseBdev2", 00:28:28.129 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:28.129 "is_configured": true, 00:28:28.129 "data_offset": 256, 00:28:28.129 "data_size": 7936 00:28:28.129 } 00:28:28.129 ] 00:28:28.129 }' 00:28:28.129 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:28.129 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:28.129 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:28.129 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:28.129 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:28.389 [2024-08-14 06:59:55.481711] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:28.389 [2024-08-14 06:59:55.562837] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:28.389 [2024-08-14 06:59:55.562934] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:28.389 [2024-08-14 06:59:55.562951] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:28.389 [2024-08-14 06:59:55.562981] 
bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:28.389 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:28.389 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:28.389 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:28.389 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:28.389 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:28.389 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:28.389 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:28.389 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:28.389 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:28.389 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:28.389 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.389 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.648 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:28.648 "name": "raid_bdev1", 00:28:28.648 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:28.648 "strip_size_kb": 0, 00:28:28.648 "state": "online", 00:28:28.648 "raid_level": "raid1", 00:28:28.649 "superblock": true, 00:28:28.649 "num_base_bdevs": 2, 00:28:28.649 "num_base_bdevs_discovered": 1, 00:28:28.649 "num_base_bdevs_operational": 1, 00:28:28.649 "base_bdevs_list": [ 00:28:28.649 { 00:28:28.649 "name": null, 00:28:28.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.649 "is_configured": false, 00:28:28.649 "data_offset": 256, 00:28:28.649 "data_size": 7936 00:28:28.649 }, 00:28:28.649 { 00:28:28.649 "name": "BaseBdev2", 00:28:28.649 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:28.649 "is_configured": true, 00:28:28.649 "data_offset": 256, 00:28:28.649 "data_size": 7936 00:28:28.649 } 00:28:28.649 ] 00:28:28.649 }' 00:28:28.649 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:28.649 06:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:29.219 06:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:29.219 06:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:29.219 06:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:29.219 06:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:29.219 06:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:29.219 
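Note: removing the spare while it is the active rebuild target makes the process-finish path fail with "No such device", after which the test expects the array to stay online but degraded, with a single operational base bdev and no process reported. A minimal sketch of that re-check, assuming the test's SPDK target is still listening on /var/tmp/spdk-raid.sock:

  # Sketch: re-check the array after the in-rebuild spare was removed.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
         | jq -r '.[] | select(.name == "raid_bdev1")')
  # Expect: state online, one discovered base bdev, and no rebuild process any more.
  echo "$info" | jq -r '.state, .num_base_bdevs_discovered, (.process.type // "none")'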
06:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.219 06:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.495 06:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:29.495 "name": "raid_bdev1", 00:28:29.495 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:29.495 "strip_size_kb": 0, 00:28:29.495 "state": "online", 00:28:29.495 "raid_level": "raid1", 00:28:29.495 "superblock": true, 00:28:29.495 "num_base_bdevs": 2, 00:28:29.495 "num_base_bdevs_discovered": 1, 00:28:29.495 "num_base_bdevs_operational": 1, 00:28:29.495 "base_bdevs_list": [ 00:28:29.495 { 00:28:29.495 "name": null, 00:28:29.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.495 "is_configured": false, 00:28:29.495 "data_offset": 256, 00:28:29.495 "data_size": 7936 00:28:29.495 }, 00:28:29.495 { 00:28:29.495 "name": "BaseBdev2", 00:28:29.495 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:29.495 "is_configured": true, 00:28:29.495 "data_offset": 256, 00:28:29.495 "data_size": 7936 00:28:29.495 } 00:28:29.495 ] 00:28:29.495 }' 00:28:29.495 06:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:29.495 06:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:29.495 06:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:29.495 06:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:29.495 06:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:29.757 [2024-08-14 06:59:56.836909] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:29.757 [2024-08-14 06:59:56.840142] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:28:29.757 [2024-08-14 06:59:56.842140] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:29.757 06:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@678 -- # sleep 1 00:28:30.693 06:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:30.693 06:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:30.693 06:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:30.693 06:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:30.693 06:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:30.693 06:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.693 06:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:30.951 "name": "raid_bdev1", 00:28:30.951 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:30.951 "strip_size_kb": 0, 00:28:30.951 "state": "online", 00:28:30.951 "raid_level": "raid1", 00:28:30.951 "superblock": true, 00:28:30.951 "num_base_bdevs": 2, 00:28:30.951 "num_base_bdevs_discovered": 2, 00:28:30.951 "num_base_bdevs_operational": 2, 00:28:30.951 "process": { 00:28:30.951 "type": "rebuild", 00:28:30.951 "target": "spare", 00:28:30.951 "progress": { 00:28:30.951 "blocks": 3072, 00:28:30.951 "percent": 38 00:28:30.951 } 00:28:30.951 }, 00:28:30.951 "base_bdevs_list": [ 00:28:30.951 { 00:28:30.951 "name": "spare", 00:28:30.951 "uuid": "29b06102-9a66-5a51-9646-938335193e56", 00:28:30.951 "is_configured": true, 00:28:30.951 "data_offset": 256, 00:28:30.951 "data_size": 7936 00:28:30.951 }, 00:28:30.951 { 00:28:30.951 "name": "BaseBdev2", 00:28:30.951 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:30.951 "is_configured": true, 00:28:30.951 "data_offset": 256, 00:28:30.951 "data_size": 7936 00:28:30.951 } 00:28:30.951 ] 00:28:30.951 }' 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:28:30.951 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # local timeout=1357 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.951 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
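Note: the "[: =: unary operator expected" message above comes from a single-bracket test at bdev_raid.sh line 681 whose left operand expanded to nothing ('[' = false ']'); quoting the operand or using [[ ]] is the usual hardening, and the failure is benign here since the script keeps running. The surrounding verify_raid_bdev_process calls poll the rebuild with jq; a hedged sketch of that polling pattern, assuming the same rpc.py and socket as the trace:

  # Sketch: poll rebuild progress the same way verify_raid_bdev_process does.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  while :; do
      info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      ptype=$(echo "$info"  | jq -r '.process.type // "none"')
      target=$(echo "$info" | jq -r '.process.target // "none"')
      pct=$(echo "$info"    | jq -r '.process.progress.percent // 0')
      echo "process=$ptype target=$target progress=${pct}%"
      [[ $ptype == none ]] && break     # rebuild finished (or no process reported)
      sleep 1
  done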
00:28:31.209 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:31.209 "name": "raid_bdev1", 00:28:31.209 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:31.209 "strip_size_kb": 0, 00:28:31.209 "state": "online", 00:28:31.209 "raid_level": "raid1", 00:28:31.209 "superblock": true, 00:28:31.209 "num_base_bdevs": 2, 00:28:31.209 "num_base_bdevs_discovered": 2, 00:28:31.209 "num_base_bdevs_operational": 2, 00:28:31.209 "process": { 00:28:31.209 "type": "rebuild", 00:28:31.209 "target": "spare", 00:28:31.209 "progress": { 00:28:31.209 "blocks": 3840, 00:28:31.209 "percent": 48 00:28:31.209 } 00:28:31.209 }, 00:28:31.209 "base_bdevs_list": [ 00:28:31.209 { 00:28:31.209 "name": "spare", 00:28:31.209 "uuid": "29b06102-9a66-5a51-9646-938335193e56", 00:28:31.209 "is_configured": true, 00:28:31.209 "data_offset": 256, 00:28:31.209 "data_size": 7936 00:28:31.209 }, 00:28:31.209 { 00:28:31.209 "name": "BaseBdev2", 00:28:31.209 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:31.209 "is_configured": true, 00:28:31.209 "data_offset": 256, 00:28:31.209 "data_size": 7936 00:28:31.209 } 00:28:31.209 ] 00:28:31.209 }' 00:28:31.209 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:31.209 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:31.209 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:31.209 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:31.209 06:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:32.583 06:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:32.583 06:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:32.583 06:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:32.583 06:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:32.583 06:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:32.583 06:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:32.583 06:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:32.583 06:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:32.583 06:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:32.583 "name": "raid_bdev1", 00:28:32.583 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:32.583 "strip_size_kb": 0, 00:28:32.583 "state": "online", 00:28:32.583 "raid_level": "raid1", 00:28:32.583 "superblock": true, 00:28:32.583 "num_base_bdevs": 2, 00:28:32.583 "num_base_bdevs_discovered": 2, 00:28:32.583 "num_base_bdevs_operational": 2, 00:28:32.583 "process": { 00:28:32.583 "type": "rebuild", 00:28:32.583 "target": "spare", 00:28:32.583 "progress": { 00:28:32.583 "blocks": 7168, 00:28:32.583 "percent": 90 
00:28:32.583 } 00:28:32.583 }, 00:28:32.583 "base_bdevs_list": [ 00:28:32.583 { 00:28:32.583 "name": "spare", 00:28:32.583 "uuid": "29b06102-9a66-5a51-9646-938335193e56", 00:28:32.583 "is_configured": true, 00:28:32.583 "data_offset": 256, 00:28:32.583 "data_size": 7936 00:28:32.583 }, 00:28:32.583 { 00:28:32.583 "name": "BaseBdev2", 00:28:32.583 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:32.583 "is_configured": true, 00:28:32.583 "data_offset": 256, 00:28:32.583 "data_size": 7936 00:28:32.583 } 00:28:32.583 ] 00:28:32.583 }' 00:28:32.583 06:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:32.583 06:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:32.583 06:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:32.583 06:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:32.583 06:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:32.841 [2024-08-14 06:59:59.956687] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:32.841 [2024-08-14 06:59:59.956780] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:32.841 [2024-08-14 06:59:59.956917] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:33.774 07:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:33.774 07:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:33.774 07:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:33.774 07:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:33.774 07:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:33.774 07:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:33.774 07:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:33.774 07:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:33.774 07:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:33.774 "name": "raid_bdev1", 00:28:33.774 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:33.774 "strip_size_kb": 0, 00:28:33.774 "state": "online", 00:28:33.774 "raid_level": "raid1", 00:28:33.774 "superblock": true, 00:28:33.774 "num_base_bdevs": 2, 00:28:33.774 "num_base_bdevs_discovered": 2, 00:28:33.774 "num_base_bdevs_operational": 2, 00:28:33.774 "base_bdevs_list": [ 00:28:33.774 { 00:28:33.774 "name": "spare", 00:28:33.774 "uuid": "29b06102-9a66-5a51-9646-938335193e56", 00:28:33.774 "is_configured": true, 00:28:33.774 "data_offset": 256, 00:28:33.774 "data_size": 7936 00:28:33.774 }, 00:28:33.774 { 00:28:33.774 "name": "BaseBdev2", 00:28:33.775 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:33.775 "is_configured": true, 00:28:33.775 "data_offset": 256, 
00:28:33.775 "data_size": 7936 00:28:33.775 } 00:28:33.775 ] 00:28:33.775 }' 00:28:33.775 07:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:34.032 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:34.032 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:34.032 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:34.032 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@724 -- # break 00:28:34.032 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:34.032 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:34.032 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:34.032 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:34.032 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:34.032 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:34.032 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.289 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:34.289 "name": "raid_bdev1", 00:28:34.289 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:34.289 "strip_size_kb": 0, 00:28:34.289 "state": "online", 00:28:34.289 "raid_level": "raid1", 00:28:34.289 "superblock": true, 00:28:34.289 "num_base_bdevs": 2, 00:28:34.289 "num_base_bdevs_discovered": 2, 00:28:34.289 "num_base_bdevs_operational": 2, 00:28:34.289 "base_bdevs_list": [ 00:28:34.289 { 00:28:34.289 "name": "spare", 00:28:34.289 "uuid": "29b06102-9a66-5a51-9646-938335193e56", 00:28:34.289 "is_configured": true, 00:28:34.289 "data_offset": 256, 00:28:34.289 "data_size": 7936 00:28:34.289 }, 00:28:34.289 { 00:28:34.289 "name": "BaseBdev2", 00:28:34.290 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:34.290 "is_configured": true, 00:28:34.290 "data_offset": 256, 00:28:34.290 "data_size": 7936 00:28:34.290 } 00:28:34.290 ] 00:28:34.290 }' 00:28:34.290 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:34.290 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:34.290 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:34.290 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:34.290 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:34.290 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:34.290 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:28:34.290 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:34.290 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:34.290 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:34.290 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:34.290 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:34.290 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:34.290 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:34.290 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:34.290 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.547 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:34.548 "name": "raid_bdev1", 00:28:34.548 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:34.548 "strip_size_kb": 0, 00:28:34.548 "state": "online", 00:28:34.548 "raid_level": "raid1", 00:28:34.548 "superblock": true, 00:28:34.548 "num_base_bdevs": 2, 00:28:34.548 "num_base_bdevs_discovered": 2, 00:28:34.548 "num_base_bdevs_operational": 2, 00:28:34.548 "base_bdevs_list": [ 00:28:34.548 { 00:28:34.548 "name": "spare", 00:28:34.548 "uuid": "29b06102-9a66-5a51-9646-938335193e56", 00:28:34.548 "is_configured": true, 00:28:34.548 "data_offset": 256, 00:28:34.548 "data_size": 7936 00:28:34.548 }, 00:28:34.548 { 00:28:34.548 "name": "BaseBdev2", 00:28:34.548 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:34.548 "is_configured": true, 00:28:34.548 "data_offset": 256, 00:28:34.548 "data_size": 7936 00:28:34.548 } 00:28:34.548 ] 00:28:34.548 }' 00:28:34.548 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:34.548 07:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:35.113 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:35.113 [2024-08-14 07:00:02.237328] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:35.113 [2024-08-14 07:00:02.237445] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:35.113 [2024-08-14 07:00:02.237576] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:35.113 [2024-08-14 07:00:02.237674] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:35.113 [2024-08-14 07:00:02.237731] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:28:35.113 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.113 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # jq 
length 00:28:35.370 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:28:35.370 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@737 -- # '[' false = true ']' 00:28:35.370 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:28:35.370 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:35.628 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:35.628 [2024-08-14 07:00:02.820318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:35.628 [2024-08-14 07:00:02.820445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:35.628 [2024-08-14 07:00:02.820508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:28:35.628 [2024-08-14 07:00:02.820546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:35.628 [2024-08-14 07:00:02.822528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:35.628 [2024-08-14 07:00:02.822601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:35.628 [2024-08-14 07:00:02.822701] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:35.628 [2024-08-14 07:00:02.822775] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:35.628 [2024-08-14 07:00:02.822915] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:35.628 spare 00:28:35.628 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:35.628 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:35.628 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:35.628 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:35.628 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:35.628 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:35.628 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:35.628 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:35.628 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:35.628 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:35.628 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.628 07:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.886 [2024-08-14 07:00:02.922839] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:28:35.886 [2024-08-14 07:00:02.922943] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:35.886 [2024-08-14 07:00:02.923088] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:28:35.886 [2024-08-14 07:00:02.923260] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:28:35.886 [2024-08-14 07:00:02.923299] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:28:35.886 [2024-08-14 07:00:02.923413] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:35.886 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:35.886 "name": "raid_bdev1", 00:28:35.886 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:35.886 "strip_size_kb": 0, 00:28:35.886 "state": "online", 00:28:35.886 "raid_level": "raid1", 00:28:35.886 "superblock": true, 00:28:35.886 "num_base_bdevs": 2, 00:28:35.886 "num_base_bdevs_discovered": 2, 00:28:35.886 "num_base_bdevs_operational": 2, 00:28:35.886 "base_bdevs_list": [ 00:28:35.886 { 00:28:35.886 "name": "spare", 00:28:35.886 "uuid": "29b06102-9a66-5a51-9646-938335193e56", 00:28:35.886 "is_configured": true, 00:28:35.886 "data_offset": 256, 00:28:35.886 "data_size": 7936 00:28:35.886 }, 00:28:35.886 { 00:28:35.886 "name": "BaseBdev2", 00:28:35.886 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:35.886 "is_configured": true, 00:28:35.886 "data_offset": 256, 00:28:35.886 "data_size": 7936 00:28:35.886 } 00:28:35.886 ] 00:28:35.886 }' 00:28:35.886 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:35.886 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:36.452 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:36.452 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:36.452 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:36.452 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:36.452 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:36.452 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.452 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:36.709 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:36.709 "name": "raid_bdev1", 00:28:36.709 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:36.709 "strip_size_kb": 0, 00:28:36.709 "state": "online", 00:28:36.709 "raid_level": "raid1", 00:28:36.709 "superblock": true, 00:28:36.709 "num_base_bdevs": 2, 00:28:36.709 "num_base_bdevs_discovered": 2, 00:28:36.709 "num_base_bdevs_operational": 2, 00:28:36.709 "base_bdevs_list": [ 00:28:36.709 { 00:28:36.709 "name": "spare", 00:28:36.709 "uuid": "29b06102-9a66-5a51-9646-938335193e56", 00:28:36.709 "is_configured": true, 
00:28:36.710 "data_offset": 256, 00:28:36.710 "data_size": 7936 00:28:36.710 }, 00:28:36.710 { 00:28:36.710 "name": "BaseBdev2", 00:28:36.710 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:36.710 "is_configured": true, 00:28:36.710 "data_offset": 256, 00:28:36.710 "data_size": 7936 00:28:36.710 } 00:28:36.710 ] 00:28:36.710 }' 00:28:36.710 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:36.710 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:36.710 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:36.710 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:36.710 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:36.710 07:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.972 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:28:36.972 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:37.230 [2024-08-14 07:00:04.266030] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:37.230 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:37.230 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:37.230 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:37.230 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:37.230 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:37.230 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:37.230 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:37.230 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:37.230 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:37.230 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:37.230 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:37.230 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.488 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:37.488 "name": "raid_bdev1", 00:28:37.488 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:37.488 "strip_size_kb": 0, 00:28:37.488 "state": "online", 00:28:37.488 "raid_level": "raid1", 00:28:37.488 "superblock": true, 00:28:37.488 
"num_base_bdevs": 2, 00:28:37.488 "num_base_bdevs_discovered": 1, 00:28:37.488 "num_base_bdevs_operational": 1, 00:28:37.488 "base_bdevs_list": [ 00:28:37.488 { 00:28:37.488 "name": null, 00:28:37.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:37.488 "is_configured": false, 00:28:37.488 "data_offset": 256, 00:28:37.488 "data_size": 7936 00:28:37.488 }, 00:28:37.488 { 00:28:37.488 "name": "BaseBdev2", 00:28:37.488 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:37.488 "is_configured": true, 00:28:37.488 "data_offset": 256, 00:28:37.488 "data_size": 7936 00:28:37.488 } 00:28:37.488 ] 00:28:37.488 }' 00:28:37.488 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:37.488 07:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:38.054 07:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:38.054 [2024-08-14 07:00:05.220461] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:38.054 [2024-08-14 07:00:05.220727] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:38.054 [2024-08-14 07:00:05.220807] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:38.054 [2024-08-14 07:00:05.220922] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:38.054 [2024-08-14 07:00:05.223850] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:28:38.054 [2024-08-14 07:00:05.225715] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:38.054 07:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # sleep 1 00:28:39.428 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:39.428 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:39.428 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:39.428 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:39.428 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:39.428 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.428 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.428 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:39.428 "name": "raid_bdev1", 00:28:39.428 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:39.428 "strip_size_kb": 0, 00:28:39.428 "state": "online", 00:28:39.428 "raid_level": "raid1", 00:28:39.428 "superblock": true, 00:28:39.428 "num_base_bdevs": 2, 00:28:39.428 "num_base_bdevs_discovered": 2, 00:28:39.428 "num_base_bdevs_operational": 2, 00:28:39.428 "process": { 00:28:39.428 "type": "rebuild", 00:28:39.428 "target": "spare", 00:28:39.428 "progress": { 
00:28:39.428 "blocks": 3072, 00:28:39.428 "percent": 38 00:28:39.428 } 00:28:39.428 }, 00:28:39.428 "base_bdevs_list": [ 00:28:39.428 { 00:28:39.428 "name": "spare", 00:28:39.428 "uuid": "29b06102-9a66-5a51-9646-938335193e56", 00:28:39.428 "is_configured": true, 00:28:39.428 "data_offset": 256, 00:28:39.428 "data_size": 7936 00:28:39.428 }, 00:28:39.428 { 00:28:39.428 "name": "BaseBdev2", 00:28:39.428 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:39.428 "is_configured": true, 00:28:39.428 "data_offset": 256, 00:28:39.428 "data_size": 7936 00:28:39.428 } 00:28:39.428 ] 00:28:39.428 }' 00:28:39.428 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:39.428 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:39.428 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:39.428 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:39.428 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:39.686 [2024-08-14 07:00:06.786984] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:39.686 [2024-08-14 07:00:06.832693] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:39.686 [2024-08-14 07:00:06.832816] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:39.686 [2024-08-14 07:00:06.832832] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:39.686 [2024-08-14 07:00:06.832842] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:39.686 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:39.686 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:39.686 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:39.686 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:39.686 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:39.686 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:39.686 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:39.686 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:39.686 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:39.686 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:39.686 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.686 07:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.945 07:00:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:39.945 "name": "raid_bdev1", 00:28:39.945 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:39.945 "strip_size_kb": 0, 00:28:39.945 "state": "online", 00:28:39.945 "raid_level": "raid1", 00:28:39.945 "superblock": true, 00:28:39.945 "num_base_bdevs": 2, 00:28:39.945 "num_base_bdevs_discovered": 1, 00:28:39.945 "num_base_bdevs_operational": 1, 00:28:39.945 "base_bdevs_list": [ 00:28:39.945 { 00:28:39.945 "name": null, 00:28:39.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.945 "is_configured": false, 00:28:39.945 "data_offset": 256, 00:28:39.945 "data_size": 7936 00:28:39.945 }, 00:28:39.945 { 00:28:39.945 "name": "BaseBdev2", 00:28:39.945 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:39.945 "is_configured": true, 00:28:39.945 "data_offset": 256, 00:28:39.945 "data_size": 7936 00:28:39.945 } 00:28:39.945 ] 00:28:39.945 }' 00:28:39.945 07:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:39.945 07:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:40.511 07:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:40.770 [2024-08-14 07:00:07.902331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:40.770 [2024-08-14 07:00:07.902495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:40.770 [2024-08-14 07:00:07.902549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:28:40.770 [2024-08-14 07:00:07.902606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:40.770 [2024-08-14 07:00:07.902833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:40.770 [2024-08-14 07:00:07.902885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:40.770 [2024-08-14 07:00:07.902979] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:40.770 [2024-08-14 07:00:07.903022] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:40.770 [2024-08-14 07:00:07.903083] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:40.770 [2024-08-14 07:00:07.903139] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:40.770 [2024-08-14 07:00:07.906071] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:28:40.770 [2024-08-14 07:00:07.908252] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:40.770 spare 00:28:40.770 07:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # sleep 1 00:28:41.703 07:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:41.703 07:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:41.703 07:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:41.703 07:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:41.703 07:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:41.703 07:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.704 07:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.962 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:41.962 "name": "raid_bdev1", 00:28:41.962 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:41.962 "strip_size_kb": 0, 00:28:41.962 "state": "online", 00:28:41.962 "raid_level": "raid1", 00:28:41.962 "superblock": true, 00:28:41.962 "num_base_bdevs": 2, 00:28:41.962 "num_base_bdevs_discovered": 2, 00:28:41.962 "num_base_bdevs_operational": 2, 00:28:41.962 "process": { 00:28:41.962 "type": "rebuild", 00:28:41.962 "target": "spare", 00:28:41.962 "progress": { 00:28:41.962 "blocks": 3072, 00:28:41.962 "percent": 38 00:28:41.962 } 00:28:41.962 }, 00:28:41.962 "base_bdevs_list": [ 00:28:41.962 { 00:28:41.962 "name": "spare", 00:28:41.962 "uuid": "29b06102-9a66-5a51-9646-938335193e56", 00:28:41.962 "is_configured": true, 00:28:41.962 "data_offset": 256, 00:28:41.962 "data_size": 7936 00:28:41.962 }, 00:28:41.962 { 00:28:41.962 "name": "BaseBdev2", 00:28:41.962 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:41.962 "is_configured": true, 00:28:41.962 "data_offset": 256, 00:28:41.962 "data_size": 7936 00:28:41.962 } 00:28:41.962 ] 00:28:41.962 }' 00:28:41.962 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:41.962 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:42.219 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:42.219 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:42.219 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:42.477 [2024-08-14 07:00:09.483159] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:42.477 [2024-08-14 07:00:09.515015] 
bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:42.477 [2024-08-14 07:00:09.515196] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:42.477 [2024-08-14 07:00:09.515221] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:42.477 [2024-08-14 07:00:09.515231] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:42.477 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:42.477 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:42.477 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:42.477 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:42.477 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:42.477 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:42.477 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:42.477 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:42.477 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:42.477 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:42.477 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.477 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.735 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:42.735 "name": "raid_bdev1", 00:28:42.735 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:42.735 "strip_size_kb": 0, 00:28:42.735 "state": "online", 00:28:42.735 "raid_level": "raid1", 00:28:42.735 "superblock": true, 00:28:42.735 "num_base_bdevs": 2, 00:28:42.735 "num_base_bdevs_discovered": 1, 00:28:42.735 "num_base_bdevs_operational": 1, 00:28:42.735 "base_bdevs_list": [ 00:28:42.735 { 00:28:42.735 "name": null, 00:28:42.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:42.735 "is_configured": false, 00:28:42.735 "data_offset": 256, 00:28:42.735 "data_size": 7936 00:28:42.735 }, 00:28:42.735 { 00:28:42.735 "name": "BaseBdev2", 00:28:42.735 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:42.735 "is_configured": true, 00:28:42.735 "data_offset": 256, 00:28:42.735 "data_size": 7936 00:28:42.735 } 00:28:42.735 ] 00:28:42.735 }' 00:28:42.735 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:42.735 07:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:43.300 07:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:43.300 07:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:28:43.300 07:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:43.300 07:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:43.300 07:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:43.300 07:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.300 07:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:43.559 07:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:43.559 "name": "raid_bdev1", 00:28:43.559 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:43.559 "strip_size_kb": 0, 00:28:43.559 "state": "online", 00:28:43.559 "raid_level": "raid1", 00:28:43.559 "superblock": true, 00:28:43.559 "num_base_bdevs": 2, 00:28:43.559 "num_base_bdevs_discovered": 1, 00:28:43.559 "num_base_bdevs_operational": 1, 00:28:43.559 "base_bdevs_list": [ 00:28:43.559 { 00:28:43.559 "name": null, 00:28:43.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:43.559 "is_configured": false, 00:28:43.559 "data_offset": 256, 00:28:43.559 "data_size": 7936 00:28:43.559 }, 00:28:43.559 { 00:28:43.559 "name": "BaseBdev2", 00:28:43.559 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:43.559 "is_configured": true, 00:28:43.559 "data_offset": 256, 00:28:43.559 "data_size": 7936 00:28:43.559 } 00:28:43.559 ] 00:28:43.559 }' 00:28:43.559 07:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:43.559 07:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:43.559 07:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:43.559 07:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:43.559 07:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:43.817 07:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:44.076 [2024-08-14 07:00:11.116398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:44.076 [2024-08-14 07:00:11.116477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:44.076 [2024-08-14 07:00:11.116502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:44.076 [2024-08-14 07:00:11.116513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:44.076 [2024-08-14 07:00:11.116704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:44.076 [2024-08-14 07:00:11.116720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:44.076 [2024-08-14 07:00:11.116783] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:44.076 [2024-08-14 07:00:11.116797] bdev_raid.c:3680:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:44.076 [2024-08-14 07:00:11.116807] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:44.076 BaseBdev1 00:28:44.076 07:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@789 -- # sleep 1 00:28:45.009 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:45.009 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:45.010 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:45.010 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:45.010 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:45.010 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:45.010 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:45.010 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:45.010 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:45.010 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:45.010 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.010 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.268 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:45.268 "name": "raid_bdev1", 00:28:45.268 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:45.268 "strip_size_kb": 0, 00:28:45.268 "state": "online", 00:28:45.268 "raid_level": "raid1", 00:28:45.268 "superblock": true, 00:28:45.268 "num_base_bdevs": 2, 00:28:45.268 "num_base_bdevs_discovered": 1, 00:28:45.268 "num_base_bdevs_operational": 1, 00:28:45.268 "base_bdevs_list": [ 00:28:45.268 { 00:28:45.268 "name": null, 00:28:45.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:45.268 "is_configured": false, 00:28:45.268 "data_offset": 256, 00:28:45.268 "data_size": 7936 00:28:45.268 }, 00:28:45.268 { 00:28:45.268 "name": "BaseBdev2", 00:28:45.268 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:45.268 "is_configured": true, 00:28:45.268 "data_offset": 256, 00:28:45.268 "data_size": 7936 00:28:45.268 } 00:28:45.268 ] 00:28:45.268 }' 00:28:45.268 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:45.268 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:45.833 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:45.833 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:45.833 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:28:45.833 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:45.833 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:45.833 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.833 07:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:46.147 "name": "raid_bdev1", 00:28:46.147 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:46.147 "strip_size_kb": 0, 00:28:46.147 "state": "online", 00:28:46.147 "raid_level": "raid1", 00:28:46.147 "superblock": true, 00:28:46.147 "num_base_bdevs": 2, 00:28:46.147 "num_base_bdevs_discovered": 1, 00:28:46.147 "num_base_bdevs_operational": 1, 00:28:46.147 "base_bdevs_list": [ 00:28:46.147 { 00:28:46.147 "name": null, 00:28:46.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:46.147 "is_configured": false, 00:28:46.147 "data_offset": 256, 00:28:46.147 "data_size": 7936 00:28:46.147 }, 00:28:46.147 { 00:28:46.147 "name": "BaseBdev2", 00:28:46.147 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:46.147 "is_configured": true, 00:28:46.147 "data_offset": 256, 00:28:46.147 "data_size": 7936 00:28:46.147 } 00:28:46.147 ] 00:28:46.147 }' 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@646 -- # local es=0 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:28:46.147 07:00:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:46.147 [2024-08-14 07:00:13.376793] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:46.147 [2024-08-14 07:00:13.377052] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:46.147 [2024-08-14 07:00:13.377114] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:46.147 request: 00:28:46.147 { 00:28:46.147 "base_bdev": "BaseBdev1", 00:28:46.147 "raid_bdev": "raid_bdev1", 00:28:46.147 "method": "bdev_raid_add_base_bdev", 00:28:46.147 "req_id": 1 00:28:46.147 } 00:28:46.147 Got JSON-RPC error response 00:28:46.147 response: 00:28:46.147 { 00:28:46.147 "code": -22, 00:28:46.147 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:46.147 } 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@649 -- # es=1 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:28:46.147 07:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@793 -- # sleep 1 00:28:47.520 07:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:47.520 07:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:47.520 07:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:47.520 07:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:47.520 07:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:47.520 07:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:47.520 07:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:47.520 07:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:47.520 07:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:47.520 07:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:47.520 07:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.520 07:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.520 
07:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:47.520 "name": "raid_bdev1", 00:28:47.520 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:47.520 "strip_size_kb": 0, 00:28:47.520 "state": "online", 00:28:47.520 "raid_level": "raid1", 00:28:47.520 "superblock": true, 00:28:47.520 "num_base_bdevs": 2, 00:28:47.520 "num_base_bdevs_discovered": 1, 00:28:47.520 "num_base_bdevs_operational": 1, 00:28:47.520 "base_bdevs_list": [ 00:28:47.520 { 00:28:47.520 "name": null, 00:28:47.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:47.520 "is_configured": false, 00:28:47.520 "data_offset": 256, 00:28:47.520 "data_size": 7936 00:28:47.520 }, 00:28:47.520 { 00:28:47.520 "name": "BaseBdev2", 00:28:47.520 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:47.520 "is_configured": true, 00:28:47.520 "data_offset": 256, 00:28:47.520 "data_size": 7936 00:28:47.520 } 00:28:47.520 ] 00:28:47.520 }' 00:28:47.520 07:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:47.520 07:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:48.086 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:48.086 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:48.086 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:48.086 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:48.086 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:48.086 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.086 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.345 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:48.345 "name": "raid_bdev1", 00:28:48.345 "uuid": "cc0bd0ac-4a5c-4722-b171-3c9730ecbff4", 00:28:48.345 "strip_size_kb": 0, 00:28:48.345 "state": "online", 00:28:48.345 "raid_level": "raid1", 00:28:48.345 "superblock": true, 00:28:48.345 "num_base_bdevs": 2, 00:28:48.345 "num_base_bdevs_discovered": 1, 00:28:48.345 "num_base_bdevs_operational": 1, 00:28:48.345 "base_bdevs_list": [ 00:28:48.345 { 00:28:48.345 "name": null, 00:28:48.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.345 "is_configured": false, 00:28:48.345 "data_offset": 256, 00:28:48.345 "data_size": 7936 00:28:48.345 }, 00:28:48.345 { 00:28:48.345 "name": "BaseBdev2", 00:28:48.345 "uuid": "88c31658-4a72-585f-8e2c-5f167f96f4f6", 00:28:48.345 "is_configured": true, 00:28:48.345 "data_offset": 256, 00:28:48.345 "data_size": 7936 00:28:48.345 } 00:28:48.345 ] 00:28:48.345 }' 00:28:48.345 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:48.345 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:48.345 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:48.345 07:00:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:48.345 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@798 -- # killprocess 110310 00:28:48.345 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 110310 ']' 00:28:48.345 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 110310 00:28:48.345 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:28:48.345 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:48.345 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 110310 00:28:48.345 killing process with pid 110310 00:28:48.345 Received shutdown signal, test time was about 60.000000 seconds 00:28:48.345 00:28:48.345 Latency(us) 00:28:48.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.345 =================================================================================================================== 00:28:48.345 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:48.345 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:48.345 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:48.345 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 110310' 00:28:48.345 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 110310 00:28:48.345 [2024-08-14 07:00:15.477778] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:48.345 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 110310 00:28:48.345 [2024-08-14 07:00:15.477963] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:48.345 [2024-08-14 07:00:15.478026] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:48.345 [2024-08-14 07:00:15.478038] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:28:48.345 [2024-08-14 07:00:15.513292] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:48.604 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@800 -- # return 0 00:28:48.604 00:28:48.604 real 0m26.975s 00:28:48.604 user 0m42.851s 00:28:48.604 sys 0m2.880s 00:28:48.604 ************************************ 00:28:48.604 END TEST raid_rebuild_test_sb_md_interleaved 00:28:48.604 ************************************ 00:28:48.604 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:48.604 07:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:48.604 07:00:15 bdev_raid -- bdev/bdev_raid.sh@994 -- # trap - EXIT 00:28:48.604 07:00:15 bdev_raid -- bdev/bdev_raid.sh@995 -- # cleanup 00:28:48.604 07:00:15 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 110310 ']' 00:28:48.604 07:00:15 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 110310 00:28:48.604 07:00:15 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:28:48.604 
************************************ 00:28:48.604 END TEST bdev_raid 00:28:48.604 ************************************ 00:28:48.604 00:28:48.604 real 22m24.474s 00:28:48.604 user 38m19.956s 00:28:48.604 sys 3m19.603s 00:28:48.604 07:00:15 bdev_raid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:48.604 07:00:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:48.604 07:00:15 -- spdk/autotest.sh@203 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:28:48.604 07:00:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:48.604 07:00:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:48.604 07:00:15 -- common/autotest_common.sh@10 -- # set +x 00:28:48.604 ************************************ 00:28:48.604 START TEST spdkcli_raid 00:28:48.604 ************************************ 00:28:48.604 07:00:15 spdkcli_raid -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:28:48.863 * Looking for test storage... 00:28:48.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:48.863 07:00:15 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:28:48.863 07:00:15 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:28:48.863 07:00:15 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:28:48.864 07:00:15 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:28:48.864 07:00:15 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:28:48.864 07:00:15 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:28:48.864 07:00:15 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 
00:28:48.864 07:00:15 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:28:48.864 07:00:15 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:48.864 07:00:15 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:48.864 07:00:15 spdkcli_raid -- spdkcli/raid.sh@15 -- # . /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:28:48.864 07:00:15 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:28:48.864 07:00:15 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:28:48.864 07:00:15 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:28:48.864 07:00:15 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:28:48.864 07:00:15 spdkcli_raid -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:48.864 07:00:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:48.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.864 07:00:15 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:28:48.864 07:00:15 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:28:48.864 07:00:15 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=111086 00:28:48.864 07:00:15 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 111086 00:28:48.864 07:00:15 spdkcli_raid -- common/autotest_common.sh@827 -- # '[' -z 111086 ']' 00:28:48.864 07:00:15 spdkcli_raid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.864 07:00:15 spdkcli_raid -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:48.864 07:00:15 spdkcli_raid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.864 07:00:15 spdkcli_raid -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:48.864 07:00:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:48.864 [2024-08-14 07:00:16.057458] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:28:48.864 [2024-08-14 07:00:16.057695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111086 ] 00:28:49.122 [2024-08-14 07:00:16.210120] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:49.123 [2024-08-14 07:00:16.263612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.123 [2024-08-14 07:00:16.263643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.689 07:00:16 spdkcli_raid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:49.689 07:00:16 spdkcli_raid -- common/autotest_common.sh@860 -- # return 0 00:28:49.689 07:00:16 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:28:49.689 07:00:16 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:49.689 07:00:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:49.947 07:00:16 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:28:49.947 07:00:16 spdkcli_raid -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:49.947 07:00:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:49.947 07:00:16 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:49.947 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:49.947 ' 00:28:51.323 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:28:51.323 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:28:51.581 07:00:18 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:28:51.581 07:00:18 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.581 07:00:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:51.581 07:00:18 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:28:51.581 07:00:18 spdkcli_raid -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:51.581 07:00:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:51.581 07:00:18 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:28:51.581 ' 00:28:52.515 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:28:52.515 07:00:19 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:28:52.515 07:00:19 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:52.515 07:00:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:52.774 07:00:19 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:28:52.774 07:00:19 spdkcli_raid -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:52.774 07:00:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:52.774 07:00:19 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:28:52.774 07:00:19 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:28:53.341 07:00:20 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:28:53.341 
07:00:20 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:28:53.341 07:00:20 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:28:53.341 07:00:20 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:53.341 07:00:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:53.341 07:00:20 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:28:53.341 07:00:20 spdkcli_raid -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:53.341 07:00:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:53.341 07:00:20 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:28:53.341 ' 00:28:54.315 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:28:54.315 07:00:21 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:28:54.315 07:00:21 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:54.315 07:00:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:54.315 07:00:21 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:28:54.315 07:00:21 spdkcli_raid -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:54.315 07:00:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:54.315 07:00:21 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:28:54.315 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:28:54.315 ' 00:28:55.691 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:28:55.691 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:28:55.950 07:00:22 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:28:55.950 07:00:22 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:55.950 07:00:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:55.950 07:00:23 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 111086 00:28:55.950 07:00:23 spdkcli_raid -- common/autotest_common.sh@946 -- # '[' -z 111086 ']' 00:28:55.950 07:00:23 spdkcli_raid -- common/autotest_common.sh@950 -- # kill -0 111086 00:28:55.950 07:00:23 spdkcli_raid -- common/autotest_common.sh@951 -- # uname 00:28:55.950 07:00:23 spdkcli_raid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:55.950 07:00:23 spdkcli_raid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111086 00:28:55.950 07:00:23 spdkcli_raid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:55.950 07:00:23 spdkcli_raid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:55.950 07:00:23 spdkcli_raid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111086' 00:28:55.950 killing process with pid 111086 00:28:55.950 07:00:23 spdkcli_raid -- common/autotest_common.sh@965 -- # kill 111086 00:28:55.950 07:00:23 spdkcli_raid -- common/autotest_common.sh@970 -- # wait 111086 00:28:56.518 07:00:23 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:28:56.518 07:00:23 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 111086 ']' 00:28:56.518 07:00:23 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 111086 00:28:56.518 07:00:23 spdkcli_raid -- common/autotest_common.sh@946 -- # '[' -z 111086 ']' 00:28:56.518 07:00:23 spdkcli_raid -- 
common/autotest_common.sh@950 -- # kill -0 111086 00:28:56.518 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (111086) - No such process 00:28:56.518 07:00:23 spdkcli_raid -- common/autotest_common.sh@973 -- # echo 'Process with pid 111086 is not found' 00:28:56.518 Process with pid 111086 is not found 00:28:56.518 07:00:23 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:28:56.518 07:00:23 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:56.518 07:00:23 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:56.518 07:00:23 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:56.518 ************************************ 00:28:56.518 END TEST spdkcli_raid 00:28:56.518 ************************************ 00:28:56.518 00:28:56.518 real 0m7.621s 00:28:56.518 user 0m16.301s 00:28:56.518 sys 0m1.011s 00:28:56.518 07:00:23 spdkcli_raid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:56.518 07:00:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:56.518 07:00:23 -- spdk/autotest.sh@204 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:28:56.518 07:00:23 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:56.518 07:00:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:56.518 07:00:23 -- common/autotest_common.sh@10 -- # set +x 00:28:56.518 ************************************ 00:28:56.518 START TEST blockdev_raid5f 00:28:56.518 ************************************ 00:28:56.518 07:00:23 blockdev_raid5f -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:28:56.518 * Looking for test storage... 
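A note on the teardown above: the "kill: (111086) - No such process" line is expected, not a failure. The cleanup path calls killprocess a second time after the target has already exited, and killprocess uses kill -0 only to confirm the pid is gone. Condensed, the pattern is roughly the following sketch (variable name illustrative):

  kill "$spdk_tgt_pid"
  # a follow-up kill -0 that fails with "No such process" is the expected confirmation
  kill -0 "$spdk_tgt_pid" 2>/dev/null || echo "Process with pid $spdk_tgt_pid is not found"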
00:28:56.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=111331 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 111331 00:28:56.518 07:00:23 blockdev_raid5f -- common/autotest_common.sh@827 -- # '[' -z 111331 ']' 00:28:56.518 07:00:23 blockdev_raid5f -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.518 07:00:23 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:56.518 07:00:23 blockdev_raid5f -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:56.518 07:00:23 blockdev_raid5f -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.518 07:00:23 blockdev_raid5f -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:56.518 07:00:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:56.518 [2024-08-14 07:00:23.758120] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
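The setup_raid5f_conf step that follows creates three malloc bdevs and layers a raid5f volume over them. Judging from the bdev_get_bdevs dump further down (three 65536-block, 512-byte base bdevs and strip_size_kb 2), the equivalent manual RPC calls would be roughly the following; this is a sketch, with sizes and flags inferred rather than quoted from this log:

  ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  ./scripts/rpc.py bdev_malloc_create -b Malloc1 32 512
  ./scripts/rpc.py bdev_malloc_create -b Malloc2 32 512
  # raid5f over the three bases with a 2 KiB strip, no superblock
  ./scripts/rpc.py bdev_raid_create -n raid5f -z 2 -r raid5f -b "Malloc0 Malloc1 Malloc2"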
00:28:56.518 [2024-08-14 07:00:23.758444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111331 ] 00:28:56.776 [2024-08-14 07:00:23.907340] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.776 [2024-08-14 07:00:23.963494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@860 -- # return 0 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@557 -- # xtrace_disable 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:57.714 Malloc0 00:28:57.714 Malloc1 00:28:57.714 Malloc2 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@557 -- # xtrace_disable 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@557 -- # xtrace_disable 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@557 -- # xtrace_disable 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@557 -- # xtrace_disable 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@557 -- # xtrace_disable 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": 
[' ' "d78ecaf9-6c70-45d5-a469-66b08b965d6e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d78ecaf9-6c70-45d5-a469-66b08b965d6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "d78ecaf9-6c70-45d5-a469-66b08b965d6e",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9c022fe6-e216-41e2-bbc6-14ff8d1258e5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "23c5e8d9-a209-4257-bad2-88071ae0efcc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "4cab3aa0-74b4-4e07-94b2-9404bc604625",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:28:57.714 07:00:24 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 111331 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@946 -- # '[' -z 111331 ']' 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@950 -- # kill -0 111331 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@951 -- # uname 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111331 00:28:57.714 killing process with pid 111331 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111331' 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@965 -- # kill 111331 00:28:57.714 07:00:24 blockdev_raid5f -- common/autotest_common.sh@970 -- # wait 111331 00:28:58.282 07:00:25 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:58.282 07:00:25 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:28:58.282 07:00:25 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:28:58.282 07:00:25 blockdev_raid5f -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:28:58.282 07:00:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:58.282 ************************************ 00:28:58.282 START TEST bdev_hello_world 00:28:58.282 ************************************ 00:28:58.282 07:00:25 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:28:58.282 [2024-08-14 07:00:25.429580] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:28:58.282 [2024-08-14 07:00:25.429872] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111369 ] 00:28:58.540 [2024-08-14 07:00:25.586270] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.540 [2024-08-14 07:00:25.637964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.799 [2024-08-14 07:00:25.822032] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:58.799 [2024-08-14 07:00:25.822088] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:28:58.799 [2024-08-14 07:00:25.822135] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:58.800 [2024-08-14 07:00:25.822494] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:58.800 [2024-08-14 07:00:25.822658] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:58.800 [2024-08-14 07:00:25.822687] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:58.800 [2024-08-14 07:00:25.822756] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
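The bdev_hello_world pass above is simply the stock hello_bdev example pointed at the raid5f bdev described by bdev.json; run standalone from the repo root, the same invocation is:

  ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b raid5f

and the NOTICE lines above (open the bdev, write "Hello World!", read it back, stop) are its normal output.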
00:28:58.800 00:28:58.800 [2024-08-14 07:00:25.822782] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:59.147 00:28:59.147 real 0m0.731s 00:28:59.147 user 0m0.389s 00:28:59.147 sys 0m0.228s 00:28:59.147 07:00:26 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:59.147 07:00:26 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:28:59.147 ************************************ 00:28:59.147 END TEST bdev_hello_world 00:28:59.147 ************************************ 00:28:59.147 07:00:26 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:28:59.147 07:00:26 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:59.147 07:00:26 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:59.147 07:00:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:59.147 ************************************ 00:28:59.147 START TEST bdev_bounds 00:28:59.147 ************************************ 00:28:59.147 07:00:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:28:59.147 07:00:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=111396 00:28:59.147 07:00:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:59.147 07:00:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:59.147 Process bdevio pid: 111396 00:28:59.147 07:00:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 111396' 00:28:59.147 07:00:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 111396 00:28:59.147 07:00:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 111396 ']' 00:28:59.147 07:00:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.147 07:00:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:59.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.147 07:00:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.147 07:00:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:59.147 07:00:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:59.147 [2024-08-14 07:00:26.200055] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
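The bdev_bounds test starting here runs the bdevio app against the same bdev.json and then drives its CUnit suite over RPC. Condensed from the command lines in this log, the two pieces are:

  ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
  # once bdevio is listening on /var/tmp/spdk.sock, kick off the test suite
  ./test/bdev/bdevio/tests.py perform_tests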
00:28:59.147 [2024-08-14 07:00:26.200232] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111396 ] 00:28:59.147 [2024-08-14 07:00:26.345363] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:59.405 [2024-08-14 07:00:26.399020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.405 [2024-08-14 07:00:26.399088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.405 [2024-08-14 07:00:26.399231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.973 07:00:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:59.973 07:00:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:28:59.973 07:00:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:59.973 I/O targets: 00:28:59.973 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:28:59.973 00:28:59.973 00:28:59.973 CUnit - A unit testing framework for C - Version 2.1-3 00:28:59.973 http://cunit.sourceforge.net/ 00:28:59.973 00:28:59.973 00:28:59.973 Suite: bdevio tests on: raid5f 00:28:59.973 Test: blockdev write read block ...passed 00:28:59.973 Test: blockdev write zeroes read block ...passed 00:29:00.232 Test: blockdev write zeroes read no split ...passed 00:29:00.232 Test: blockdev write zeroes read split ...passed 00:29:00.232 Test: blockdev write zeroes read split partial ...passed 00:29:00.232 Test: blockdev reset ...passed 00:29:00.232 Test: blockdev write read 8 blocks ...passed 00:29:00.232 Test: blockdev write read size > 128k ...passed 00:29:00.232 Test: blockdev write read invalid size ...passed 00:29:00.232 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:00.232 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:00.232 Test: blockdev write read max offset ...passed 00:29:00.232 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:00.232 Test: blockdev writev readv 8 blocks ...passed 00:29:00.232 Test: blockdev writev readv 30 x 1block ...passed 00:29:00.232 Test: blockdev writev readv block ...passed 00:29:00.232 Test: blockdev writev readv size > 128k ...passed 00:29:00.232 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:00.232 Test: blockdev comparev and writev ...passed 00:29:00.232 Test: blockdev nvme passthru rw ...passed 00:29:00.232 Test: blockdev nvme passthru vendor specific ...passed 00:29:00.232 Test: blockdev nvme admin passthru ...passed 00:29:00.232 Test: blockdev copy ...passed 00:29:00.232 00:29:00.232 Run Summary: Type Total Ran Passed Failed Inactive 00:29:00.232 suites 1 1 n/a 0 0 00:29:00.232 tests 23 23 23 0 0 00:29:00.232 asserts 130 130 130 0 n/a 00:29:00.232 00:29:00.232 Elapsed time = 0.401 seconds 00:29:00.232 0 00:29:00.232 07:00:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 111396 00:29:00.232 07:00:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 111396 ']' 00:29:00.232 07:00:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 111396 00:29:00.232 07:00:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:29:00.232 07:00:27 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:00.232 07:00:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111396 00:29:00.232 07:00:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:00.232 07:00:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:00.232 07:00:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111396' 00:29:00.232 killing process with pid 111396 00:29:00.232 07:00:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@965 -- # kill 111396 00:29:00.232 07:00:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # wait 111396 00:29:00.491 07:00:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:29:00.491 00:29:00.491 real 0m1.616s 00:29:00.491 user 0m3.981s 00:29:00.491 sys 0m0.351s 00:29:00.491 07:00:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:00.491 07:00:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:00.491 ************************************ 00:29:00.491 END TEST bdev_bounds 00:29:00.491 ************************************ 00:29:00.751 07:00:27 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:29:00.751 07:00:27 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:29:00.751 07:00:27 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:00.751 07:00:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:00.751 ************************************ 00:29:00.751 START TEST bdev_nbd 00:29:00.751 ************************************ 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- 
bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=111449 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 111449 /var/tmp/spdk-nbd.sock 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 111449 ']' 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:00.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:00.751 07:00:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:00.751 [2024-08-14 07:00:27.901888] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:29:00.751 [2024-08-14 07:00:27.902033] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.010 [2024-08-14 07:00:28.053145] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.010 [2024-08-14 07:00:28.108548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.578 07:00:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:01.578 07:00:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:29:01.578 07:00:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:29:01.578 07:00:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:01.578 07:00:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:29:01.578 07:00:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:01.578 07:00:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:29:01.578 07:00:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:01.578 07:00:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:29:01.578 07:00:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:01.578 07:00:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:29:01.578 07:00:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:01.578 07:00:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:01.578 07:00:28 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:01.578 07:00:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:29:01.836 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:01.836 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:01.836 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:01.836 07:00:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:29:01.836 07:00:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:29:01.836 07:00:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:01.836 07:00:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:01.836 07:00:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:29:01.836 07:00:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:29:01.836 07:00:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:01.836 07:00:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:01.837 07:00:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:01.837 1+0 records in 00:29:01.837 1+0 records out 00:29:01.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00399129 s, 1.0 MB/s 00:29:01.837 07:00:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:02.095 07:00:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:29:02.095 07:00:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:02.095 07:00:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:02.095 07:00:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:29:02.095 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:02.095 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:02.096 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:02.096 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:02.096 { 00:29:02.096 "nbd_device": "/dev/nbd0", 00:29:02.096 "bdev_name": "raid5f" 00:29:02.096 } 00:29:02.096 ]' 00:29:02.096 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:02.096 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:02.096 { 00:29:02.096 "nbd_device": "/dev/nbd0", 00:29:02.096 "bdev_name": "raid5f" 00:29:02.096 } 00:29:02.096 ]' 00:29:02.096 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:02.355 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:02.355 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:02.355 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:02.355 
07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:02.355 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:02.355 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:02.355 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:02.613 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:02.613 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:02.613 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:02.614 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:02.614 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:02.614 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:02.614 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:02.614 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:02.614 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:02.614 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:02.614 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:02.614 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:02.614 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:02.614 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:02.872 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:02.872 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:02.872 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:02.872 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:02.872 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:02.872 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:02.872 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:29:02.872 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:02.872 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:29:02.872 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:29:02.872 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:02.872 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:29:02.872 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:02.873 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:29:02.873 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:02.873 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:29:02.873 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
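The data-verify phase that follows exports raid5f over NBD and round-trips data through the block device. Condensed from the steps logged below (temporary file paths shortened), the sequence is roughly:

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  # byte-compare the first 1M read back from the NBD device against the source file
  cmp -b -n 1M nbdrandtest /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0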
00:29:02.873 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:29:02.873 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:02.873 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:02.873 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:02.873 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:29:02.873 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:02.873 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:02.873 07:00:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:29:02.873 /dev/nbd0 00:29:02.873 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:02.873 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:02.873 07:00:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:29:02.873 07:00:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:29:02.873 07:00:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:02.873 07:00:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:02.873 07:00:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:03.132 1+0 records in 00:29:03.132 1+0 records out 00:29:03.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480002 s, 8.5 MB/s 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:03.132 { 00:29:03.132 "nbd_device": "/dev/nbd0", 00:29:03.132 "bdev_name": "raid5f" 00:29:03.132 } 00:29:03.132 ]' 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:03.132 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:03.132 { 00:29:03.132 "nbd_device": "/dev/nbd0", 00:29:03.132 "bdev_name": "raid5f" 00:29:03.132 } 00:29:03.132 ]' 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:03.391 256+0 records in 00:29:03.391 256+0 records out 00:29:03.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00429933 s, 244 MB/s 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:03.391 256+0 records in 00:29:03.391 256+0 records out 00:29:03.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301423 s, 34.8 MB/s 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:03.391 07:00:30 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:03.391 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:03.650 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:03.650 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:03.650 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:03.650 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:03.650 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:03.650 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:03.650 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:03.650 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:03.650 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:03.650 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:03.650 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:03.909 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:03.909 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:03.909 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:03.909 07:00:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:03.909 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:03.909 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:03.909 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:03.909 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:03.909 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:03.909 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:29:03.909 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:03.909 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:29:03.909 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:03.909 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:03.909 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:29:03.909 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:29:03.909 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:29:03.909 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:04.168 malloc_lvol_verify 00:29:04.168 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:04.435 30f8ae15-b9de-4397-a47f-2aa63063f4de 00:29:04.435 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:04.704 9d1ffb86-c811-4c32-a663-edbbf07f0b9e 00:29:04.704 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:04.704 /dev/nbd0 00:29:04.704 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:29:04.963 mke2fs 1.47.0 (5-Feb-2023) 00:29:04.963 Discarding device blocks: 0/4096 done 00:29:04.963 Creating filesystem with 4096 1k blocks and 1024 inodes 00:29:04.963 00:29:04.963 Allocating group tables: 0/1 done 00:29:04.963 Writing inode tables: 0/1 done 00:29:04.963 Creating journal (1024 blocks): done 00:29:04.963 Writing superblocks and filesystem accounting information: 0/1 done 00:29:04.963 00:29:04.963 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:29:04.963 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:04.963 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:04.963 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:04.963 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:04.963 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:04.963 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:04.963 07:00:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:04.963 07:00:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:04.963 07:00:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:04.963 07:00:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:04.963 07:00:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:04.963 07:00:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:04.963 07:00:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:04.963 07:00:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:04.963 07:00:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:04.963 07:00:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:29:04.963 07:00:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:29:04.964 07:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 111449 00:29:04.964 07:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 111449 ']' 00:29:04.964 07:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 111449 00:29:04.964 07:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:29:04.964 07:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@951 -- # 
'[' Linux = Linux ']' 00:29:04.964 07:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111449 00:29:05.223 07:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:05.223 killing process with pid 111449 00:29:05.223 07:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:05.223 07:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111449' 00:29:05.223 07:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@965 -- # kill 111449 00:29:05.223 07:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # wait 111449 00:29:05.482 07:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:29:05.482 00:29:05.482 real 0m4.717s 00:29:05.482 user 0m7.072s 00:29:05.482 sys 0m1.226s 00:29:05.482 07:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:05.482 07:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:05.482 ************************************ 00:29:05.482 END TEST bdev_nbd 00:29:05.482 ************************************ 00:29:05.482 07:00:32 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:29:05.482 07:00:32 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:29:05.482 07:00:32 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:29:05.482 07:00:32 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:29:05.482 07:00:32 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:05.482 07:00:32 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:05.482 07:00:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:05.482 ************************************ 00:29:05.482 START TEST bdev_fio 00:29:05.482 ************************************ 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:29:05.482 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:29:05.482 
07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:29:05.482 ************************************ 00:29:05.482 START TEST bdev_fio_rw_verify 00:29:05.482 ************************************ 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:05.482 07:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:05.483 07:00:32 
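For context, the rw_verify stage traced here drives fio through the SPDK bdev fio plugin rather than a kernel block device: the trace below resolves libasan and preloads build/fio/spdk_bdev before launching /usr/src/fio/fio. A minimal standalone sketch of an equivalent run, assuming the same repo layout; the job file body and the /tmp path are illustrative, since the real bdev.fio is generated by fio_config_gen:

# Illustrative reproduction of the traced fio run (paths and job body are assumptions).
SPDK=/home/vagrant/spdk_repo/spdk

cat > /tmp/raid5f.fio <<'EOF'
; Verify job against the raid5f bdev; filename is the bdev name, not a device node.
[global]
ioengine=spdk_bdev
thread=1
bs=4k
iodepth=8
runtime=10
rw=randwrite
verify=crc32c

[job_raid5f]
filename=raid5f
EOF

# The plugin is loaded via LD_PRELOAD and pointed at the generated bdev configuration,
# exactly as the trace below does (libasan is additionally preloaded on ASAN builds).
LD_PRELOAD=$SPDK/build/fio/spdk_bdev /usr/src/fio/fio \
    --spdk_json_conf=$SPDK/test/bdev/bdev.json /tmp/raid5f.fio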
blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:05.483 07:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:05.483 07:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:05.483 07:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:29:05.483 07:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:05.483 07:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:05.483 07:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:05.483 07:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:05.483 07:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:29:05.483 07:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:05.483 07:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:05.483 07:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # break 00:29:05.483 07:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:05.483 07:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:05.741 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:29:05.741 fio-3.35 00:29:05.741 Starting 1 thread 00:29:17.987 00:29:17.987 job_raid5f: (groupid=0, jobs=1): err= 0: pid=111632: Wed Aug 14 07:00:43 2024 00:29:17.987 read: IOPS=8345, BW=32.6MiB/s (34.2MB/s)(326MiB/10001msec) 00:29:17.987 slat (usec): min=19, max=109, avg=28.44, stdev= 3.72 00:29:17.987 clat (usec): min=13, max=501, avg=191.64, stdev=68.81 00:29:17.987 lat (usec): min=39, max=529, avg=220.08, stdev=69.38 00:29:17.987 clat percentiles (usec): 00:29:17.987 | 50.000th=[ 190], 99.000th=[ 326], 99.900th=[ 388], 99.990th=[ 441], 00:29:17.987 | 99.999th=[ 502] 00:29:17.987 write: IOPS=8744, BW=34.2MiB/s (35.8MB/s)(337MiB/9868msec); 0 zone resets 00:29:17.987 slat (usec): min=9, max=277, avg=24.91, stdev= 5.76 00:29:17.987 clat (usec): min=84, max=1835, avg=435.45, stdev=65.92 00:29:17.987 lat (usec): min=108, max=2056, avg=460.36, stdev=67.68 00:29:17.987 clat percentiles (usec): 00:29:17.987 | 50.000th=[ 441], 99.000th=[ 603], 99.900th=[ 734], 99.990th=[ 1418], 00:29:17.987 | 99.999th=[ 1844] 00:29:17.987 bw ( KiB/s): min=31672, max=38640, per=99.14%, avg=34677.05, stdev=1583.51, samples=19 00:29:17.987 iops : min= 7918, max= 9660, avg=8669.26, stdev=395.88, samples=19 00:29:17.987 lat (usec) : 20=0.01%, 100=5.82%, 250=31.71%, 500=55.98%, 750=6.45% 
00:29:17.987 lat (usec) : 1000=0.03% 00:29:17.987 lat (msec) : 2=0.02% 00:29:17.987 cpu : usr=98.82%, sys=0.43%, ctx=47, majf=0, minf=10347 00:29:17.987 IO depths : 1=7.8%, 2=20.0%, 4=55.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:17.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:17.987 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:17.987 issued rwts: total=83463,86288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:17.987 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:17.987 00:29:17.987 Run status group 0 (all jobs): 00:29:17.987 READ: bw=32.6MiB/s (34.2MB/s), 32.6MiB/s-32.6MiB/s (34.2MB/s-34.2MB/s), io=326MiB (342MB), run=10001-10001msec 00:29:17.987 WRITE: bw=34.2MiB/s (35.8MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.8MB/s), io=337MiB (353MB), run=9868-9868msec 00:29:17.987 ----------------------------------------------------- 00:29:17.987 Suppressions used: 00:29:17.987 count bytes template 00:29:17.987 1 7 /usr/src/fio/parse.c 00:29:17.987 244 23424 /usr/src/fio/iolog.c 00:29:17.987 1 8 libtcmalloc_minimal.so 00:29:17.987 1 904 libcrypto.so 00:29:17.987 ----------------------------------------------------- 00:29:17.987 00:29:17.987 00:29:17.987 real 0m11.246s 00:29:17.987 user 0m11.680s 00:29:17.987 sys 0m0.606s 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:29:17.987 ************************************ 00:29:17.987 END TEST bdev_fio_rw_verify 00:29:17.987 ************************************ 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:29:17.987 07:00:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:29:17.987 07:00:44 
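The step traced next decides whether a trim job makes sense by asking each bdev whether it supports unmap; only names of unmap-capable bdevs would be written into the trim job, and the raid5f dump that follows reports "unmap": false, so the trim job stays empty. A sketch of the same filter applied to a live target's bdev list, assuming rpc.py can reach the default RPC socket:

# Assumes a running SPDK application reachable over the default RPC socket.
SPDK=/home/vagrant/spdk_repo/spdk

# bdev_get_bdevs returns a JSON array of bdev descriptions; keep the names of
# the ones that advertise unmap support (the same jq filter used in the trace).
$SPDK/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'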
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:29:17.987 07:00:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "d78ecaf9-6c70-45d5-a469-66b08b965d6e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d78ecaf9-6c70-45d5-a469-66b08b965d6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "d78ecaf9-6c70-45d5-a469-66b08b965d6e",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9c022fe6-e216-41e2-bbc6-14ff8d1258e5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "23c5e8d9-a209-4257-bad2-88071ae0efcc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "4cab3aa0-74b4-4e07-94b2-9404bc604625",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:29:17.987 07:00:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:29:17.987 07:00:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:17.987 07:00:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:29:17.987 /home/vagrant/spdk_repo/spdk 00:29:17.987 07:00:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:29:17.987 07:00:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:29:17.987 00:29:17.987 real 0m11.489s 00:29:17.987 user 0m11.810s 00:29:17.987 sys 0m0.703s 00:29:17.987 07:00:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:17.987 07:00:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:29:17.988 ************************************ 00:29:17.988 END TEST bdev_fio 00:29:17.988 ************************************ 00:29:17.988 07:00:44 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:17.988 07:00:44 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:17.988 07:00:44 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:29:17.988 07:00:44 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:17.988 07:00:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:17.988 ************************************ 00:29:17.988 START TEST bdev_verify 00:29:17.988 ************************************ 
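The bdev_verify stage that follows switches from fio to the bdevperf example app, which submits I/O to the configured bdevs directly. A sketch of the traced invocation with its flags spelled out; the flag meanings are standard bdevperf options, and -C is simply passed through as in the trace:

# bdevperf loads the bdevs from --json and drives them without a kernel block layer.
SPDK=/home/vagrant/spdk_repo/spdk

# -q 128: queue depth, -o 4096: I/O size in bytes, -w verify: verification workload,
# -t 5: run time in seconds, -m 0x3: reactors on cores 0 and 1, -C: as in the trace.
$SPDK/build/examples/bdevperf --json "$SPDK/test/bdev/bdev.json" \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3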
00:29:17.988 07:00:44 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:17.988 [2024-08-14 07:00:44.206685] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:29:17.988 [2024-08-14 07:00:44.206834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111780 ] 00:29:17.988 [2024-08-14 07:00:44.355893] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:17.988 [2024-08-14 07:00:44.411946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.988 [2024-08-14 07:00:44.412035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.988 Running I/O for 5 seconds... 00:29:23.264 00:29:23.264 Latency(us) 00:29:23.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.264 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:23.264 Verification LBA range: start 0x0 length 0x2000 00:29:23.264 raid5f : 5.02 5791.88 22.62 0.00 0.00 33055.06 270.09 27931.50 00:29:23.264 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:23.264 Verification LBA range: start 0x2000 length 0x2000 00:29:23.264 raid5f : 5.01 5795.61 22.64 0.00 0.00 33099.81 291.55 28274.92 00:29:23.264 =================================================================================================================== 00:29:23.264 Total : 11587.49 45.26 0.00 0.00 33077.44 270.09 28274.92 00:29:23.264 00:29:23.264 real 0m5.770s 00:29:23.264 user 0m10.694s 00:29:23.264 sys 0m0.246s 00:29:23.264 ************************************ 00:29:23.264 END TEST bdev_verify 00:29:23.264 ************************************ 00:29:23.264 07:00:49 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:23.264 07:00:49 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:29:23.264 07:00:49 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:23.264 07:00:49 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:29:23.264 07:00:49 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:23.264 07:00:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:23.264 ************************************ 00:29:23.264 START TEST bdev_verify_big_io 00:29:23.264 ************************************ 00:29:23.264 07:00:49 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:23.264 [2024-08-14 07:00:50.055779] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 
00:29:23.264 [2024-08-14 07:00:50.056039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111856 ] 00:29:23.264 [2024-08-14 07:00:50.207753] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:23.264 [2024-08-14 07:00:50.266029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.264 [2024-08-14 07:00:50.266159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.264 Running I/O for 5 seconds... 00:29:28.571 00:29:28.571 Latency(us) 00:29:28.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.571 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:28.571 Verification LBA range: start 0x0 length 0x200 00:29:28.571 raid5f : 5.21 426.53 26.66 0.00 0.00 7277062.70 203.01 386462.07 00:29:28.571 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:28.571 Verification LBA range: start 0x200 length 0x200 00:29:28.571 raid5f : 5.24 436.14 27.26 0.00 0.00 7149705.42 146.67 375472.63 00:29:28.571 =================================================================================================================== 00:29:28.571 Total : 862.67 53.92 0.00 0.00 7212465.69 146.67 386462.07 00:29:28.830 ************************************ 00:29:28.830 END TEST bdev_verify_big_io 00:29:28.830 ************************************ 00:29:28.830 00:29:28.830 real 0m5.987s 00:29:28.830 user 0m11.141s 00:29:28.830 sys 0m0.236s 00:29:28.830 07:00:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:28.830 07:00:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:29:28.830 07:00:56 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:28.830 07:00:56 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:29:28.830 07:00:56 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:28.830 07:00:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:28.830 ************************************ 00:29:28.830 START TEST bdev_write_zeroes 00:29:28.830 ************************************ 00:29:28.830 07:00:56 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:29.088 [2024-08-14 07:00:56.098577] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:29:29.088 [2024-08-14 07:00:56.098708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111937 ] 00:29:29.088 [2024-08-14 07:00:56.243957] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.088 [2024-08-14 07:00:56.294351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.346 Running I/O for 1 seconds... 
00:29:30.279 00:29:30.279 Latency(us) 00:29:30.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.279 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:30.279 raid5f : 1.01 24872.49 97.16 0.00 0.00 5128.94 1488.15 7183.20 00:29:30.279 =================================================================================================================== 00:29:30.279 Total : 24872.49 97.16 0.00 0.00 5128.94 1488.15 7183.20 00:29:30.537 00:29:30.537 real 0m1.734s 00:29:30.537 user 0m1.401s 00:29:30.537 sys 0m0.212s 00:29:30.537 07:00:57 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:30.537 07:00:57 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:29:30.537 ************************************ 00:29:30.537 END TEST bdev_write_zeroes 00:29:30.537 ************************************ 00:29:30.796 07:00:57 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:30.796 07:00:57 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:29:30.796 07:00:57 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:30.796 07:00:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:30.796 ************************************ 00:29:30.796 START TEST bdev_json_nonenclosed 00:29:30.796 ************************************ 00:29:30.796 07:00:57 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:30.796 [2024-08-14 07:00:57.897426] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:29:30.796 [2024-08-14 07:00:57.897551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111975 ] 00:29:30.796 [2024-08-14 07:00:58.044381] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.054 [2024-08-14 07:00:58.097693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.054 [2024-08-14 07:00:58.097795] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:29:31.054 [2024-08-14 07:00:58.097820] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:31.054 [2024-08-14 07:00:58.097831] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:31.054 00:29:31.054 real 0m0.399s 00:29:31.054 user 0m0.187s 00:29:31.054 sys 0m0.108s 00:29:31.054 07:00:58 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:31.054 ************************************ 00:29:31.054 END TEST bdev_json_nonenclosed 00:29:31.054 ************************************ 00:29:31.054 07:00:58 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:29:31.054 07:00:58 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:31.054 07:00:58 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:29:31.054 07:00:58 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:31.054 07:00:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:31.054 ************************************ 00:29:31.054 START TEST bdev_json_nonarray 00:29:31.054 ************************************ 00:29:31.054 07:00:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:31.313 [2024-08-14 07:00:58.360064] Starting SPDK v24.09-pre git sha1 d47670264 / DPDK 22.11.4 initialization... 00:29:31.313 [2024-08-14 07:00:58.360223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111995 ] 00:29:31.313 [2024-08-14 07:00:58.504392] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.313 [2024-08-14 07:00:58.556708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.313 [2024-08-14 07:00:58.556828] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
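Both JSON failures above are intentional negative tests: nonenclosed.json omits the enclosing braces and nonarray.json gives 'subsystems' a non-array value, so the loader is expected to reject each and the app exits non-zero. For contrast, a sketch of the shape the loader accepts, assuming the usual SPDK JSON config layout; the malloc entry is a placeholder, not the generated test config:

# A minimal well-formed config: one top-level object whose "subsystems" member is an array.
cat > /tmp/good.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF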
00:29:31.313 [2024-08-14 07:00:58.556857] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:31.313 [2024-08-14 07:00:58.556867] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:31.571 00:29:31.571 real 0m0.402s 00:29:31.571 user 0m0.187s 00:29:31.571 sys 0m0.111s 00:29:31.571 07:00:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:31.571 ************************************ 00:29:31.571 END TEST bdev_json_nonarray 00:29:31.571 ************************************ 00:29:31.571 07:00:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:29:31.571 07:00:58 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:29:31.571 07:00:58 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:29:31.571 07:00:58 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:29:31.571 07:00:58 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:29:31.572 07:00:58 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:29:31.572 07:00:58 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:31.572 07:00:58 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:31.572 07:00:58 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:29:31.572 07:00:58 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:29:31.572 07:00:58 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:29:31.572 07:00:58 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:29:31.572 ************************************ 00:29:31.572 END TEST blockdev_raid5f 00:29:31.572 ************************************ 00:29:31.572 00:29:31.572 real 0m35.202s 00:29:31.572 user 0m48.830s 00:29:31.572 sys 0m4.359s 00:29:31.572 07:00:58 blockdev_raid5f -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:31.572 07:00:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:31.572 07:00:58 -- spdk/autotest.sh@207 -- # uname -s 00:29:31.572 07:00:58 -- spdk/autotest.sh@207 -- # [[ Linux == Linux ]] 00:29:31.572 07:00:58 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:29:31.572 07:00:58 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:29:31.572 07:00:58 -- spdk/autotest.sh@220 -- # '[' 0 -eq 1 ']' 00:29:31.572 07:00:58 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:29:31.572 07:00:58 -- spdk/autotest.sh@269 -- # timing_exit lib 00:29:31.572 07:00:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:31.572 07:00:58 -- common/autotest_common.sh@10 -- # set +x 00:29:31.830 07:00:58 -- spdk/autotest.sh@271 -- # '[' 0 -eq 1 ']' 00:29:31.830 07:00:58 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:29:31.830 07:00:58 -- spdk/autotest.sh@285 -- # '[' 0 -eq 1 ']' 00:29:31.830 07:00:58 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:31.830 07:00:58 -- spdk/autotest.sh@323 -- # '[' 0 -eq 1 ']' 00:29:31.830 07:00:58 -- spdk/autotest.sh@327 -- # '[' 0 -eq 1 ']' 00:29:31.830 07:00:58 -- spdk/autotest.sh@332 -- # '[' 0 -eq 1 ']' 00:29:31.830 07:00:58 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:29:31.830 07:00:58 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:31.830 07:00:58 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:31.830 07:00:58 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:29:31.831 07:00:58 -- spdk/autotest.sh@358 -- # '[' 0 -eq 1 ']' 00:29:31.831 07:00:58 -- 
spdk/autotest.sh@363 -- # '[' 0 -eq 1 ']' 00:29:31.831 07:00:58 -- spdk/autotest.sh@367 -- # '[' 0 -eq 1 ']' 00:29:31.831 07:00:58 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:31.831 07:00:58 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:29:31.831 07:00:58 -- spdk/autotest.sh@382 -- # [[ 0 -eq 1 ]] 00:29:31.831 07:00:58 -- spdk/autotest.sh@386 -- # [[ '' -eq 1 ]] 00:29:31.831 07:00:58 -- spdk/autotest.sh@391 -- # trap - SIGINT SIGTERM EXIT 00:29:31.831 07:00:58 -- spdk/autotest.sh@393 -- # timing_enter post_cleanup 00:29:31.831 07:00:58 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:31.831 07:00:58 -- common/autotest_common.sh@10 -- # set +x 00:29:31.831 07:00:58 -- spdk/autotest.sh@394 -- # autotest_cleanup 00:29:31.831 07:00:58 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:29:31.831 07:00:58 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:29:31.831 07:00:58 -- common/autotest_common.sh@10 -- # set +x 00:29:33.733 INFO: APP EXITING 00:29:33.733 INFO: killing all VMs 00:29:33.733 INFO: killing vhost app 00:29:33.733 INFO: EXIT DONE 00:29:33.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:33.992 Waiting for block devices as requested 00:29:33.992 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:33.992 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:34.930 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:34.930 Cleaning 00:29:34.930 Removing: /var/run/dpdk/spdk0/config 00:29:34.930 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:34.930 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:34.930 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:34.930 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:34.930 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:34.930 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:34.930 Removing: /dev/shm/spdk_tgt_trace.pid68115 00:29:34.930 Removing: /var/run/dpdk/spdk0 00:29:34.930 Removing: /var/run/dpdk/spdk_pid100446 00:29:34.930 Removing: /var/run/dpdk/spdk_pid100939 00:29:34.930 Removing: /var/run/dpdk/spdk_pid103910 00:29:34.930 Removing: /var/run/dpdk/spdk_pid104726 00:29:34.930 Removing: /var/run/dpdk/spdk_pid105291 00:29:34.930 Removing: /var/run/dpdk/spdk_pid106568 00:29:34.930 Removing: /var/run/dpdk/spdk_pid107050 00:29:34.930 Removing: /var/run/dpdk/spdk_pid108200 00:29:34.930 Removing: /var/run/dpdk/spdk_pid108684 00:29:34.930 Removing: /var/run/dpdk/spdk_pid109822 00:29:34.930 Removing: /var/run/dpdk/spdk_pid110310 00:29:34.930 Removing: /var/run/dpdk/spdk_pid111086 00:29:34.930 Removing: /var/run/dpdk/spdk_pid111331 00:29:34.930 Removing: /var/run/dpdk/spdk_pid111369 00:29:34.930 Removing: /var/run/dpdk/spdk_pid111396 00:29:34.930 Removing: /var/run/dpdk/spdk_pid111617 00:29:34.930 Removing: /var/run/dpdk/spdk_pid111780 00:29:34.930 Removing: /var/run/dpdk/spdk_pid111856 00:29:34.930 Removing: /var/run/dpdk/spdk_pid111937 00:29:34.930 Removing: /var/run/dpdk/spdk_pid111975 00:29:34.930 Removing: /var/run/dpdk/spdk_pid111995 00:29:34.930 Removing: /var/run/dpdk/spdk_pid67960 00:29:35.191 Removing: /var/run/dpdk/spdk_pid68115 00:29:35.191 Removing: /var/run/dpdk/spdk_pid68313 00:29:35.191 Removing: /var/run/dpdk/spdk_pid68397 00:29:35.191 Removing: /var/run/dpdk/spdk_pid68425 00:29:35.191 Removing: /var/run/dpdk/spdk_pid68542 00:29:35.191 Removing: /var/run/dpdk/spdk_pid68560 00:29:35.191 Removing: 
/var/run/dpdk/spdk_pid68713 00:29:35.191 Removing: /var/run/dpdk/spdk_pid68784 00:29:35.191 Removing: /var/run/dpdk/spdk_pid68850 00:29:35.191 Removing: /var/run/dpdk/spdk_pid68942 00:29:35.191 Removing: /var/run/dpdk/spdk_pid69020 00:29:35.191 Removing: /var/run/dpdk/spdk_pid69054 00:29:35.191 Removing: /var/run/dpdk/spdk_pid69095 00:29:35.191 Removing: /var/run/dpdk/spdk_pid69153 00:29:35.191 Removing: /var/run/dpdk/spdk_pid69259 00:29:35.191 Removing: /var/run/dpdk/spdk_pid69671 00:29:35.191 Removing: /var/run/dpdk/spdk_pid69724 00:29:35.191 Removing: /var/run/dpdk/spdk_pid69776 00:29:35.191 Removing: /var/run/dpdk/spdk_pid69786 00:29:35.191 Removing: /var/run/dpdk/spdk_pid69850 00:29:35.191 Removing: /var/run/dpdk/spdk_pid69866 00:29:35.191 Removing: /var/run/dpdk/spdk_pid69928 00:29:35.191 Removing: /var/run/dpdk/spdk_pid69940 00:29:35.191 Removing: /var/run/dpdk/spdk_pid69993 00:29:35.191 Removing: /var/run/dpdk/spdk_pid70011 00:29:35.191 Removing: /var/run/dpdk/spdk_pid70053 00:29:35.191 Removing: /var/run/dpdk/spdk_pid70071 00:29:35.191 Removing: /var/run/dpdk/spdk_pid70190 00:29:35.191 Removing: /var/run/dpdk/spdk_pid70232 00:29:35.191 Removing: /var/run/dpdk/spdk_pid70302 00:29:35.191 Removing: /var/run/dpdk/spdk_pid71753 00:29:35.191 Removing: /var/run/dpdk/spdk_pid72086 00:29:35.191 Removing: /var/run/dpdk/spdk_pid72257 00:29:35.191 Removing: /var/run/dpdk/spdk_pid73111 00:29:35.191 Removing: /var/run/dpdk/spdk_pid73461 00:29:35.191 Removing: /var/run/dpdk/spdk_pid73632 00:29:35.191 Removing: /var/run/dpdk/spdk_pid74519 00:29:35.191 Removing: /var/run/dpdk/spdk_pid75037 00:29:35.191 Removing: /var/run/dpdk/spdk_pid75207 00:29:35.191 Removing: /var/run/dpdk/spdk_pid77244 00:29:35.191 Removing: /var/run/dpdk/spdk_pid77690 00:29:35.191 Removing: /var/run/dpdk/spdk_pid77859 00:29:35.191 Removing: /var/run/dpdk/spdk_pid79887 00:29:35.191 Removing: /var/run/dpdk/spdk_pid80347 00:29:35.191 Removing: /var/run/dpdk/spdk_pid80527 00:29:35.191 Removing: /var/run/dpdk/spdk_pid82519 00:29:35.191 Removing: /var/run/dpdk/spdk_pid83198 00:29:35.191 Removing: /var/run/dpdk/spdk_pid83372 00:29:35.191 Removing: /var/run/dpdk/spdk_pid85557 00:29:35.191 Removing: /var/run/dpdk/spdk_pid86045 00:29:35.191 Removing: /var/run/dpdk/spdk_pid86234 00:29:35.191 Removing: /var/run/dpdk/spdk_pid88441 00:29:35.191 Removing: /var/run/dpdk/spdk_pid88942 00:29:35.191 Removing: /var/run/dpdk/spdk_pid89129 00:29:35.191 Removing: /var/run/dpdk/spdk_pid91352 00:29:35.191 Removing: /var/run/dpdk/spdk_pid92149 00:29:35.191 Removing: /var/run/dpdk/spdk_pid92339 00:29:35.191 Removing: /var/run/dpdk/spdk_pid92521 00:29:35.191 Removing: /var/run/dpdk/spdk_pid93002 00:29:35.191 Removing: /var/run/dpdk/spdk_pid93856 00:29:35.191 Removing: /var/run/dpdk/spdk_pid94285 00:29:35.191 Removing: /var/run/dpdk/spdk_pid95110 00:29:35.191 Removing: /var/run/dpdk/spdk_pid95616 00:29:35.191 Removing: /var/run/dpdk/spdk_pid96520 00:29:35.451 Removing: /var/run/dpdk/spdk_pid96990 00:29:35.451 Removing: /var/run/dpdk/spdk_pid99741 00:29:35.451 Clean 00:29:35.451 07:01:02 -- common/autotest_common.sh@1447 -- # return 0 00:29:35.451 07:01:02 -- spdk/autotest.sh@395 -- # timing_exit post_cleanup 00:29:35.451 07:01:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:35.451 07:01:02 -- common/autotest_common.sh@10 -- # set +x 00:29:35.451 07:01:02 -- spdk/autotest.sh@397 -- # timing_exit autotest 00:29:35.451 07:01:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:35.451 07:01:02 -- common/autotest_common.sh@10 -- # set 
+x 00:29:35.451 07:01:02 -- spdk/autotest.sh@398 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:35.451 07:01:02 -- spdk/autotest.sh@400 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:35.451 07:01:02 -- spdk/autotest.sh@400 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:35.451 07:01:02 -- spdk/autotest.sh@402 -- # hash lcov 00:29:35.451 07:01:02 -- spdk/autotest.sh@402 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:35.451 07:01:02 -- spdk/autotest.sh@404 -- # hostname 00:29:35.451 07:01:02 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:35.711 geninfo: WARNING: invalid characters removed from testname! 00:29:57.664 07:01:24 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:01.006 07:01:27 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:02.908 07:01:29 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:04.815 07:01:32 -- spdk/autotest.sh@408 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:07.350 07:01:34 -- spdk/autotest.sh@409 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:09.252 07:01:36 -- spdk/autotest.sh@410 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:11.785 07:01:38 -- spdk/autotest.sh@411 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:11.785 07:01:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:11.785 07:01:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 
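The coverage post-processing traced above follows one pattern: capture counters from the instrumented tree, merge them with the pre-test baseline, then strip paths that should not count against SPDK. A condensed sketch of those passes, assuming lcov on PATH and the job's output layout; the switches and exclusion list are abbreviated to a subset of the patterns shown above:

# Common switches used by every lcov pass in the trace (branch/function coverage on, quiet).
opts='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'
repo=/home/vagrant/spdk_repo/spdk
out=$repo/../output

# 1. Capture test-time counters from the build tree, tagged with the host name.
lcov $opts -c -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"

# 2. Merge the pre-test baseline with the test capture.
lcov $opts -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# 3. Remove third-party and system paths so they do not count against SPDK coverage.
lcov $opts -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
lcov $opts -r "$out/cov_total.info" '/usr/*' -o "$out/cov_total.info"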
00:30:11.785 07:01:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.786 07:01:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.786 07:01:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.786 07:01:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.786 07:01:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.786 07:01:38 -- paths/export.sh@5 -- $ export PATH 00:30:11.786 07:01:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.786 07:01:38 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:30:11.786 07:01:38 -- common/autobuild_common.sh@447 -- $ date +%s 00:30:11.786 07:01:38 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1723618898.XXXXXX 00:30:11.786 07:01:38 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1723618898.KEFs3W 00:30:11.786 07:01:38 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:30:11.786 07:01:38 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:30:11.786 07:01:38 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:30:11.786 07:01:38 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:30:11.786 07:01:38 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:30:11.786 07:01:38 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:30:11.786 07:01:38 -- common/autobuild_common.sh@463 -- $ get_config_params 00:30:11.786 07:01:38 -- common/autotest_common.sh@394 -- $ xtrace_disable 00:30:11.786 07:01:38 -- common/autotest_common.sh@10 -- $ set +x 00:30:11.786 07:01:38 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage 
--with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:30:11.786 07:01:38 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:30:11.786 07:01:38 -- pm/common@17 -- $ local monitor 00:30:11.786 07:01:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:11.786 07:01:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:11.786 07:01:38 -- pm/common@25 -- $ sleep 1 00:30:11.786 07:01:38 -- pm/common@21 -- $ date +%s 00:30:11.786 07:01:38 -- pm/common@21 -- $ date +%s 00:30:11.786 07:01:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1723618898 00:30:11.786 07:01:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1723618898 00:30:11.786 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1723618898_collect-vmstat.pm.log 00:30:11.786 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1723618898_collect-cpu-load.pm.log 00:30:12.742 07:01:39 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:30:12.742 07:01:39 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:30:12.742 07:01:39 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:30:12.742 07:01:39 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:12.742 07:01:39 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:12.742 07:01:39 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:12.742 07:01:39 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:12.742 07:01:39 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:12.742 07:01:39 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:12.742 07:01:39 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:12.742 07:01:39 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:12.742 07:01:39 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:12.742 07:01:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:12.742 07:01:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:12.742 07:01:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:12.742 07:01:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:30:12.742 07:01:39 -- pm/common@44 -- $ pid=113480 00:30:12.742 07:01:39 -- pm/common@50 -- $ kill -TERM 113480 00:30:12.742 07:01:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:12.742 07:01:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:30:12.742 07:01:39 -- pm/common@44 -- $ pid=113482 00:30:12.742 07:01:39 -- pm/common@50 -- $ kill -TERM 113482 00:30:12.742 + [[ -n 6170 ]] 00:30:12.742 + sudo kill 6170 00:30:13.011 [Pipeline] } 00:30:13.027 [Pipeline] // timeout 00:30:13.032 [Pipeline] } 00:30:13.048 [Pipeline] // stage 00:30:13.053 [Pipeline] } 00:30:13.068 [Pipeline] // catchError 00:30:13.078 [Pipeline] stage 00:30:13.080 [Pipeline] { (Stop VM) 00:30:13.093 [Pipeline] sh 00:30:13.372 + vagrant halt 00:30:16.662 ==> default: Halting domain... 00:30:24.805 [Pipeline] sh 00:30:25.086 + vagrant destroy -f 00:30:27.620 ==> default: Removing domain... 
00:30:27.890 [Pipeline] sh 00:30:28.171 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:30:28.179 [Pipeline] } 00:30:28.191 [Pipeline] // stage 00:30:28.197 [Pipeline] } 00:30:28.211 [Pipeline] // dir 00:30:28.216 [Pipeline] } 00:30:28.231 [Pipeline] // wrap 00:30:28.238 [Pipeline] } 00:30:28.251 [Pipeline] // catchError 00:30:28.261 [Pipeline] stage 00:30:28.263 [Pipeline] { (Epilogue) 00:30:28.276 [Pipeline] sh 00:30:28.554 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:33.830 [Pipeline] catchError 00:30:33.832 [Pipeline] { 00:30:33.845 [Pipeline] sh 00:30:34.162 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:34.162 Artifacts sizes are good 00:30:34.172 [Pipeline] } 00:30:34.188 [Pipeline] // catchError 00:30:34.200 [Pipeline] archiveArtifacts 00:30:34.207 Archiving artifacts 00:30:34.327 [Pipeline] cleanWs 00:30:34.338 [WS-CLEANUP] Deleting project workspace... 00:30:34.338 [WS-CLEANUP] Deferred wipeout is used... 00:30:34.345 [WS-CLEANUP] done 00:30:34.347 [Pipeline] } 00:30:34.362 [Pipeline] // stage 00:30:34.367 [Pipeline] } 00:30:34.382 [Pipeline] // node 00:30:34.387 [Pipeline] End of Pipeline 00:30:34.425 Finished: SUCCESS